
    Understanding the Role of Data Analytics in Driving Discriminatory Managerial Decisions

    Data analytics has been accused of contributing to discriminatory managerial decisions in organizations’ marketing strategies. To date, most studies have focused on the technical antecedents of such discrimination, and therefore little is known about the role of human factors in making these discriminatory decisions. This work-in-progress study aims to address this gap by opening the black box between the use of data analytics in organizations and the making of discriminatory decisions. Drawing upon the theory of moral disengagement, we posit that four dimensions of moral disengagement, namely dehumanization, euphemistic labeling, displacement of responsibility, and disregard of consequences, are the mechanisms through which the use of data analytics tools in organizations can bring about discriminatory decisions. Moreover, data size and employees’ competency are discussed as moderators of some of these mechanisms. A survey-based methodology to empirically validate the proposed model is outlined, and potential contributions to theory and practice are delineated.

    Demographic Transparency to Combat Data Analytics Discriminatory Recommendations

    Data Analytics (DA) has been blamed for contributing to discriminatory managerial decisions in organizations. To date, most studies have focused on the technical antecedents of such discrimination. As a result, little is known about how to ameliorate the problem by focusing on the human aspects of decision making when using DA in organizational settings. This study represents an effort to address this gap. Drawing on the cognitive elaboration model of ethical decision-making, construal level theory, and the literature on moral intensity, this study investigates how the availability and design of demographic transparency (a form of decisional guidance) can lower DA users’ likelihood of agreeing with discriminatory recommendations of DA tools. In addition, this study examines the role of users’ mindfulness and organizational ethical culture in this process. This paper outlines an experimental methodology to empirically validate the proposed model and hypotheses and delineates potential contributions to theory and practice.

    CAHRS hrSpectrum (January - February 2008)


    Can the Use of Data Analytics Tools Lead to Discriminatory Decisions?

    Data Analytics (DA) has been criticized for contributing to discriminatory decisions in organizations. To date, several studies have investigated the reasons why DA tools generate discriminatory recommendations and how to ameliorate the issue. Nonetheless, recent studies by researchers, practitioners, and government agencies show that despite the progress made, the issue has not been eliminated. As a result, it is crucial for DA users to be vigilant about the danger of discriminatory recommendations generated by DA tools. This study represents an effort to provide empirical evidence about whether and to what extent decision makers will readily accept a discriminatory DA recommendation, and about the cognition and attitudes associated with this behavior. The results of an empirical study confirm that a majority of users readily accepted a discriminatory recommendation and shed light on the factors that influence this acceptance.

    Not seeing the (moral) forest for the trees? How task complexity and employees’ expertise affect moral disengagement with discriminatory data analytics recommendations

    Data analytics provides versatile decision support to help employees tackle the rising complexity of today’s business decisions. Notwithstanding the benefits of these systems, research has shown their potential for provoking discriminatory decisions. While technical causes have been studied, the human side has been mostly neglected, even though employees still usually need to decide whether to turn analytics recommendations into actions. Drawing upon theories of technology dominance and moral disengagement, we investigate how task complexity and employees’ expertise affect the approval of discriminatory data analytics recommendations. Through two online experiments, we confirm the important role of advantageous comparison, displacement of responsibility, and dehumanization as the cognitive moral disengagement mechanisms that facilitate such approvals. While task complexity generally strengthens these mechanisms, expertise retains a critical role in analytics-supported decision-making processes. Importantly, we find that task complexity’s effects on users’ dehumanization vary: more data subjects increase dehumanization, whereas richer information on subjects has the opposite effect. By identifying the cognitive mechanisms that facilitate approvals of discriminatory data analytics recommendations, this study contributes toward designing tools, methods, and practices that combat the unethical consequences of using these systems.

    Three Issues with the State of People and Workplace Analytics

    People and workplace analytics is a much-hyped topic. It denotes information systems and processes for data-driven decision-making concerning people-related organizational outcomes. The topic is driven by practitioners, with only scarce academic backing. We outline three challenges for the field of people and workplace analytics: first, ambiguity in definitions and conceptions; second, sparse research and a lack of scientific evidence for the espoused value propositions; and third, the lack of a strong theoretical foundation. To address these challenges, we propose a categorization schema, grounded in existing research on management information systems and tailored to people and workplace analytics. The schema helps to identify the prevalent conceptions of people and workplace analytics and to clarify the identified gaps in understanding.

    Algorithmic Bosses Can’t Lie! How to Foster Transparency and Limit Abuses of the New Algorithmic Managers

    Technology is changing the way entrepreneurs manage their human resources. Many employers have already started to abandon the completely human exercise of their managerial prerogatives, totally or partially delegating it to more or less smart machines. Data collected through people or workforce analytics practices are the fuel that fills the tank of algorithmic management tools, which are capable of taking automated decisions affecting the workforce. Notwithstanding the advantages in terms of increased labour productivity, resorting to technology is not always risk-free. It has already happened, including in the HR management context, that algorithms have revealed themselves to be biased decision-makers. This problem has often been exacerbated by the lack of transparency characterising most automated decision-making processes. Moreover, the issue is aggravated in the employment context because it increases the already existing information asymmetries between entrepreneurs and workers. These are the main reasons why it has been argued that workforce analytics and algorithmic management practices may entail an augmentation of managerial prerogatives unheard of in the past. It has also been stressed that this should entail an update, or even a rethinking, of employment laws that, as they stand today, may be inadequate to address the issues posed by the technological revolution. This paper thus tries to understand, looking mainly at the Italian and other EU civil-law-based legal systems, whether there are rules that may foster transparency and prevent abuses of employers’ managerial prerogatives potentially arising from the increasing recourse to algorithmic management practices. More specifically, this article points to three types of regulatory techniques that may alleviate the abovementioned issues.
    These three regulatory techniques are: a) information and access rights, to be exercised before a claim has been brought, with a view to gathering evidence to be used at trial; b) rules that, at trial, totally or partially shift the burden of proof onto the employer; and c) rules that, at trial, grant employment judges broad powers to gather evidence. All these regulatory techniques strongly incentivise employers to adopt only those algorithmic tools whose decision-making process can potentially be made transparent to their employees and, in case of a trial, to employment judges. The employment legal system therefore already knows how to foster transparency in the workplace and consequently to uncover violations of the rules that already limit abuses of managerial prerogatives by employers. In light of the pervasive use of new technological tools to manage human resources, greater recourse to these regulatory antibodies can constitute an effective policy recommendation for facing the challenges posed by the algorithmic revolution.