
    Machine Decisions and Human Consequences

    As we increasingly delegate decision-making to algorithms, whether directly or indirectly, important questions emerge in circumstances where those decisions have direct consequences for individual rights and personal opportunities, as well as for the collective good. A key problem for policymakers is that the social implications of these new methods can only be grasped if there is an adequate comprehension of their general technical underpinnings. The discussion here focuses primarily on the case of enforcement decisions in the criminal justice system, but draws on similar situations emerging from other algorithms utilised in controlling access to opportunities, to explain how machine learning works and, as a result, how decisions are made by modern intelligent algorithms or 'classifiers'. It examines the key aspects of the performance of classifiers, including how classifiers learn, the fact that they operate on the basis of correlation rather than causation, and the fact that the term 'bias' in machine learning has a different meaning from its common usage. An example of a real-world 'classifier', the Harm Assessment Risk Tool (HART), is examined through identification of its technical features: the classification method, the training data and the test data, the features and the labels, and the validation and performance measures. Four normative benchmarks are then considered by reference to HART: (a) prediction accuracy, (b) fairness and equality before the law, (c) transparency and accountability, and (d) informational privacy and freedom of expression, in order to demonstrate how its technical features have important normative dimensions that bear directly on the extent to which the system can be regarded as a viable and legitimate support for, or even alternative to, existing human decision-makers.
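
    To make the abstract's technical vocabulary concrete, the minimal sketch below trains a generic classifier on synthetic data and reports standard performance measures. It is illustrative only, not the actual HART model or its data; the feature meanings, model choice, and all numbers are assumptions.

```python
# Minimal sketch of the classifier workflow described above: features and
# labels, a train/test split, fitting, and standard performance measures.
# Synthetic data and the model choice are illustrative assumptions, not HART.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))    # features (hypothetical risk-relevant variables)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # labels

# Training data vs. test data: the model never sees the test split while learning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)  # classification method
clf.fit(X_train, y_train)      # the classifier "learns" correlations, not causes

# Validation and performance measures
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("confusion matrix:\n", confusion_matrix(y_test, pred))
```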

    Bio-signals application in solution of human-machine interface

    The article deals with the field of Human-Machine Interface. It focuses on the characteristics of bio-signals and their evaluation. Based on the analysis of cerebral and cephalic bio-signals, it is possible to assess the immediate physical and psychological condition of the operator of a controlled process. Timely analysis of bio-signals makes it possible to avoid the negative consequences of incorrect decisions caused by attention fatigue.
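
    The abstract does not specify its analysis pipeline, so as a hedged illustration of what "timely analysis of bio-signals" might look like, the sketch below estimates the EEG theta/alpha band-power ratio, one commonly used fatigue indicator. The sampling rate, threshold, and synthetic signal are assumptions, not the article's method.

```python
# Hedged sketch: theta/alpha band-power ratio from a (synthetic) EEG channel,
# estimated with Welch's method. A rising ratio is one commonly used fatigue
# indicator; the 1.0 threshold and the signal itself are illustrative.
import numpy as np
from scipy.signal import welch

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t)          # alpha component (~10 Hz)
       + 0.6 * np.sin(2 * np.pi * 6 * t)   # theta component (~6 Hz)
       + 0.3 * np.random.default_rng(0).normal(size=t.size))

f, pxx = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(f, pxx, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])    # integrate the power spectral density

ratio = band_power(f, pxx, 4, 8) / band_power(f, pxx, 8, 13)  # theta / alpha
print(f"theta/alpha ratio: {ratio:.2f}",
      "-> possible fatigue" if ratio > 1.0 else "-> alert")
```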

    Algorithmic Human Resources Management – Perspectives and Challenges

    Theoretical background: Technology – most notably the processes of digitalisation, the use of artificial intelligence, machine learning and big data, and the prevalence of remote work due to the pandemic – is changing the way organisations manage human resources. One growing trend is the use of so-called "algorithmic management". It is notably different from previous e-HRM or HRIS (human resources information systems) applications in that it automates HR-related duties. Algorithms, being autonomous computational formulae, are considered objective and mathematically correct decision-making mechanisms. Limiting human involvement in, and oversight of, the labour process might lead to serious ethical and managerial challenges. Many areas previously the sole responsibility of managers (including HR managers) – such as employment relations, hiring, performance management and remuneration – are increasingly affected, or even taken over, by algorithmic management.
    Purpose of the article: The purpose of this article is to review the development, perspectives and challenges (including possible biases and ethical considerations) of algorithmic human resources management. This novel approach is fuelled by the accelerating processes of digitalisation, the use of artificial intelligence and big data, and the increased analytical capabilities and applications used by contemporary companies. Algorithms are formulas that autonomously make decisions based on statistical models or decision rules without human intervention. The use of algorithmic HRM therefore automates the decision-making processes and duties of human resources managers, thereby limiting human involvement and oversight, which can have negative consequences for the organisation.
    Research methods: The article provides a critical literature review of theoretical sources and empirical evidence on the application of algorithmic human resources management practices. Scientific journals in the field of human resources management and technology applications have been reviewed, as well as research reports from academic institutions and renowned international organisations.
    Main findings: Applications of algorithmic human resources management are an emerging field of study that is currently not extensively researched. Little is known about the scale of use or the consequences of this more automated approach to managing human work. Scarce evidence suggests possible negative consequences, including ethical concerns, biases leading to discriminatory decisions, and adverse employee reactions to decisions based on algorithms. After reviewing possible future developments and challenges connected to algorithmic HRM, this article proposes actions aimed at re-humanising the approach to managerial decision-making with the support of algorithms, ensuring transparency of the algorithms' construction and functionality, and increasing reliability and reducing possible biases.
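
    To illustrate what the abstract means by "decision rules without human intervention", here is a deliberately simple, entirely hypothetical screening rule. The fields and thresholds are invented; the point is how easily such a rule removes human oversight, not that any real system works this way.

```python
# Hypothetical illustration of algorithmic management as "decision rules
# without human intervention": a toy CV-screening rule. All fields and
# thresholds are invented; real systems are typically statistical models.
from dataclasses import dataclass

@dataclass
class Applicant:
    years_experience: float
    skills_matched: int    # how many required skills the CV mentions

def auto_screen(a: Applicant) -> str:
    # The "algorithm": a fixed decision rule applied with no human review.
    if a.years_experience >= 3 and a.skills_matched >= 4:
        return "invite to interview"
    return "reject"

# A borderline candidate is rejected automatically; no manager ever sees
# the case, which is exactly the loss of oversight the article discusses.
print(auto_screen(Applicant(years_experience=2.9, skills_matched=5)))
```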

    Algorithmic Assistance with Recommendation-Dependent Preferences

    When an algorithm provides risk assessments, we typically think of them as helpful inputs to human decisions, such as when risk scores are presented to judges or doctors. However, a decision-maker may not only react to the information provided by the algorithm. The decision-maker may also view the algorithmic recommendation as a default action, making it costly for them to deviate, such as when a judge is reluctant to overrule a high-risk assessment for a defendant or a doctor fears the consequences of deviating from recommended procedures. To address such unintended consequences of algorithmic assistance, we propose a principal-agent model of joint human-machine decision-making. Within this model, we consider the effect and design of algorithmic recommendations when they affect choices not just by shifting beliefs, but also by altering preferences. We motivate this assumption from institutional factors, such as a desire to avoid audits, as well as from well-established models in behavioral science that predict loss aversion relative to a reference point, which here is set by the algorithm. We show that recommendation-dependent preferences create inefficiencies where the decision-maker is overly responsive to the recommendation. As a potential remedy, we discuss algorithms that strategically withhold recommendations, and show how they can improve the quality of final decisions.
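
    A hedged numeric sketch of the core mechanism: the decision-maker maximises expected payoff minus a cost for deviating from the algorithmic recommendation, which acts as the reference point. The payoffs, belief, and cost parameter are invented numbers; the paper's formal model may differ in its specifics.

```python
# Sketch of recommendation-dependent preferences: the decision-maker picks
# the action maximising expected payoff MINUS a cost for deviating from the
# algorithm's recommendation. All numbers are illustrative assumptions.

def best_action(p_high_risk, recommendation, kappa):
    # Expected payoff of each action given the belief p_high_risk.
    # The payoff matrix is invented: detaining a low-risk defendant and
    # releasing a high-risk one are both costly.
    payoff = {
        "detain":  -2.0 * (1 - p_high_risk),   # cost of detaining low-risk
        "release": -5.0 * p_high_risk,         # cost of releasing high-risk
    }
    def utility(a):
        deviation_cost = kappa if a != recommendation else 0.0
        return payoff[a] - deviation_cost
    return max(payoff, key=utility)

# With this belief, releasing is payoff-optimal (-1.0 vs -1.6), and with no
# deviation cost the decision-maker releases; a positive deviation cost makes
# them follow the recommendation instead -- the over-responsiveness analysed.
print(best_action(p_high_risk=0.2, recommendation="detain", kappa=0.0))  # release
print(best_action(p_high_risk=0.2, recommendation="detain", kappa=1.0))  # detain
```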

    Exploring Potential Flaws and Dangers Involving Machine Learning Technology

    This paper seeks to explore the ways in which machine learning and AI may influence the world in the future and the potential for the technology to be misused or exploited. In 1959 Arthur Samuel defined machine learning as "the field of study that gives computers the ability to learn without being explicitly programmed" (Munoz). This paper will also seek to find out if there is merit to the current worry that robots will take over some jobs based on cognitive abilities. In the past, a human was required to perform these jobs, but with the rise of more complex automation a person may no longer be necessary. Many of the sources cited throughout this paper focus on the innovation of machine learning and AI and on how dangerous the over-automation of the world could be. Machine learning and the resulting AIs have their place in the world, and more than likely they will do nothing but push the world towards a more fruitful future. Looking at the potential risks of letting lines of code make important decisions is crucial, given the consequences that negligence can have. There is a need to explore these topics because losing the human element in decision-making can have big implications if the AI is not programmed correctly. Machine learning has one of the greatest opportunities to impact the world. The need for caution, however, cannot be overstated, given the potential dangers it may pose to jobs, security, and the overall stability of an ever-changing world.

    Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR

    There has been much discussion of the right to explanation in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the black box of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR. We suggest data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
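
    For a linear score, the "smallest change" the abstract describes has a closed form: project the input onto the decision boundary and step just across it. The sketch below shows this for an invented credit-style model; the weights, feature meanings, and Euclidean distance are assumptions, and the paper's approach also covers non-linear black boxes by posing the same idea as an optimisation problem.

```python
# Sketch of a counterfactual explanation: the smallest (Euclidean) change to
# an input that flips a linear classifier's decision. Model weights and the
# two features (income, debt) are invented for illustration.
import numpy as np

w = np.array([1.0, -2.0])    # assumed weights: income helps, debt hurts
b = -0.5                     # assumed bias; decision: approve iff w @ x + b >= 0

x = np.array([1.0, 1.0])     # applicant: score = 1 - 2 - 0.5 = -1.5 -> denied

# Closest point on the hyperplane w @ x + b = 0, plus a 1% step across it.
score = w @ x + b
x_cf = x - (score / np.dot(w, w)) * w * 1.01

print("original decision:", "approve" if score >= 0 else "deny")
print("counterfactual input:", np.round(x_cf, 3))
print("change needed:", np.round(x_cf - x, 3))   # e.g. "reduce debt by ..."
print("new decision:", "approve" if w @ x_cf + b >= 0 else "deny")
```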

    Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making

    Causal machine learning methods which flexibly generate heterogeneous treatment effect estimates could be very useful tools for governments trying to make and implement policy. However, as the critical artificial intelligence literature has shown, governments must be very careful of unintended consequences when using machine learning models. One way to try to protect against unintended bad outcomes is with AI Fairness methods, which seek to create machine learning models where sensitive variables like race or gender do not influence outcomes. In this paper we argue that standard AI Fairness approaches developed for predictive machine learning are not suitable for all causal machine learning applications, because causal machine learning generally (at least so far) uses modelling to inform a human who is the ultimate decision-maker, while AI Fairness approaches assume a model that is making decisions directly. We define these scenarios as indirect and direct decision-making, respectively, and suggest that policy-making is best seen as a joint decision where the causal machine learning model usually only has indirect power. We lay out a definition of fairness for this scenario (a model that provides the information a decision-maker needs to accurately make a value judgement about just policy outcomes) and argue that the complexity of causal machine learning models can make this difficult to achieve. The solution here is not traditional AI Fairness adjustments, but careful modelling and awareness of some of the decision-making biases that these methods might encourage, which we describe.
    Comment: 13 pages, 1 figure
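
    As a hedged sketch of the causal machine learning setting the abstract refers to, the code below estimates heterogeneous treatment effects with a simple T-learner on synthetic data. The learner choice and data-generating process are assumptions, not the paper's method; the point is that the output is a per-individual estimate handed to a human policymaker (indirect decision-making), not a decision itself.

```python
# Sketch of heterogeneous treatment effect estimation (a simple T-learner):
# fit separate outcome models for treated and control units, then report
# CATE(x) = mu1(x) - mu0(x). Data and learner choice are illustrative; the
# estimates inform a human decision-maker rather than deciding anything.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-1, 1, size=(n, 2))
T = rng.integers(0, 2, size=n)                  # randomised treatment
true_effect = 1.0 + X[:, 0]                     # effect varies with x0
y = X[:, 1] + T * true_effect + rng.normal(scale=0.5, size=n)

mu0 = RandomForestRegressor(random_state=0).fit(X[T == 0], y[T == 0])
mu1 = RandomForestRegressor(random_state=0).fit(X[T == 1], y[T == 1])

X_new = np.array([[-0.8, 0.0], [0.8, 0.0]])     # two hypothetical individuals
cate = mu1.predict(X_new) - mu0.predict(X_new)
print("estimated effects:", np.round(cate, 2))  # roughly 0.2 and 1.8
```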