
    Developing unbiased artificial intelligence in recruitment and selection : a processual framework : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Management at Massey University, Albany, Auckland, New Zealand

    For several generations, scientists have attempted to build enhanced intelligence into computer systems. Recently, progress in developing and implementing Artificial Intelligence (AI) has quickened. AI is now attracting the attention of business and government leaders as a potential way to optimise decisions and performance across all management levels, from operational to strategic. One of the business areas where AI is widely used is the Recruitment and Selection (R&S) process. However, despite this tremendous growth in interest, the potential impact of AI on human life, society and culture remains poorly understood. One of the most significant issues is the danger of biases being built into the gathering and analysis of data and the subsequent decision-making. Cognitive biases enter algorithmic models through the implicit values of the humans involved in defining, coding, collecting, selecting or using the data that trains the algorithm. Through machine learning, these biases can then become self-reinforcing, leading the AI to make biased decisions. If AI systems are to guide managers in making effective decisions, unbiased AI is required. This study adopted an exploratory, qualitative research design to explore potential biases in the R&S process and how cognitive biases can be mitigated in the development of AI-Recruitment Systems (AIRS). Classic grounded theory guided the study design, data gathering and analysis. Thirty-nine HR managers and AI developers from around the world were interviewed. The findings empirically represent the development process of AIRS, as well as technical and non-technical techniques for mitigating cognitive biases at each stage of that process. The study contributes to the theory of information system design by explaining the phase of retraining that corresponds with continuous mutability in developing AI.
AI is developed through retraining the machine learning models as part of the development process, which demonstrates the mutability of the system. The learning process over many training cycles improves the algorithms' accuracy. This study also extends knowledge-sharing concepts by highlighting the importance of cross-functional knowledge sharing between HR managers and AI developers to mitigate cognitive biases in developing AIRS. Knowledge sharing in developing AIRS can occur in understanding the essential criteria for each job position; preparing datasets for training ML models; testing ML models; and giving feedback, retraining, and improving ML models. Finally, this study contributes to our understanding of AI transparency by identifying two known cognitive biases in the R&S process, similar-to-me bias and stereotype bias, whose identification assists in assessing ML model outcomes. In addition, the AIRS process model provides a clear account of data collection, data preparation, and the training and retraining of the ML model, and indicates the role of HR managers and AI developers in mitigating biases and their accountability for AIRS decisions. The development process of unbiased AIRS has significant implications for the human resource field, as well as for other fields and industries where AI is used today, such as education and insurance services, in mitigating cognitive biases during AI development. This study also provides information about the limitations of AI systems and educates human decision-makers (i.e. HR managers) to avoid building biases into their systems in the first place.
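    The retrain-and-reassess cycle the abstract describes can be sketched in a few lines. This is an illustrative toy only, not code from the dissertation: the perceptron update rule, the four-example dataset and the accuracy check are all hypothetical stand-ins for a real AIRS training pipeline, chosen to show how repeated retraining cycles can raise a model's accuracy on its evaluation data.

    ```python
    # Toy "retrain, then re-evaluate" loop (hypothetical; not from the study).
    # Each cycle retrains the model on the data, then measures accuracy,
    # mirroring the mutability-through-retraining idea in the abstract.

    def predict(weights, features):
        """Simple linear threshold classifier: 1 if the weighted sum is positive."""
        return 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0

    def train_cycle(weights, data, lr=0.1):
        """One retraining pass: nudge weights toward correct labels (perceptron rule)."""
        for features, label in data:
            err = label - predict(weights, features)
            weights = [w + lr * err * x for w, x in zip(weights, features)]
        return weights

    def accuracy(weights, data):
        """Fraction of examples the current model labels correctly."""
        hits = sum(1 for f, label in data if predict(weights, f) == label)
        return hits / len(data)

    # Hypothetical screening data: (candidate features, keep/reject label).
    data = [([1.0, 2.0], 1), ([2.0, 1.0], 1), ([-1.0, -1.5], 0), ([-2.0, -0.5], 0)]

    weights = [0.0, 0.0]
    history = []
    for cycle in range(5):              # each cycle = retrain, then re-assess
        weights = train_cycle(weights, data)
        history.append(accuracy(weights, data))

    print(history)                      # accuracy after each retraining cycle
    ```

    In a real AIRS the "re-assess" step would also include the bias checks the study discusses (e.g. inspecting outcomes for similar-to-me or stereotype effects), with the feedback from HR managers and developers feeding the next retraining cycle.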

    Understanding Concept Maps: A Closer Look at How People Organise Ideas


    Prisons, Genres, and Big Data: Understanding the Language of Corrections in America's Prisons

    This dissertation seeks to answer one fundamental question: How can I, as a researcher, conduct social justice research that is ethical, durable, and portable? As social justice research becomes more prominent in the field of technical and professional communication, ethical research practices must be maintained to avoid unintentionally wounding the subjects for whom researchers hope to advocate. The dissertation is divided into five sections, each written as a stand-alone article that builds on the principles of the section before it. Each section addresses a key question: 1) How do I ethically engage in social justice research? 2) How do I ethically engage with big data and algorithmic rhetorics? 3) How do I frame my research to have the most impact outside my home discipline? 4) What does an ethical, computational content analysis look like? 5) How do these principles translate into the classroom? Together, these articles identify a methodology called Institutional Genre Analysis, which treats text as data produced by an institution rather than by individual users, avoiding many of the pitfalls of big data research while providing a means for what Vitanza calls "intellectual guerilla warfare conducted by [marginalized individuals]" (1987, p. 52).

    Beyond written computation

    This collection of papers, based on research into aspects of number, is the result of a writing conference held on Rottnest Island, near Perth, Western Australia. The concept of the conference came from Alistair McIntosh and Len Sparrow and was based on two similar meetings organised by Cal Irons and Bob Reys. All papers in this book were discussed at the Rottnest conference, and the authors made subsequent changes based on comments and recommendations from the peer group who attended.

    A learning theory approach to students' misconceptions in calculus

    Bibliography: leaves 129-138. This study analyses students' errors in calculus through the lens of learning theories. The subjects were 117 students enrolled in a calculus course for students from disadvantaged educational backgrounds at the University of Cape Town. A coding scheme was developed to categorise the errors these students made in the final examination. The categorisation was supported by error data generated through a conceptual test and follow-up interviews. The pattern of errors in the coding scheme suggests that the students perceive algebra largely as a "game of letters". As a result, their construction of calculus knowledge is based on the rehearsal of algorithmic procedures, and their errors indicate that they develop linking and extending mechanisms to deal with the multiplicity of rules generated by this process of rehearsal.

    Metalogic and the psychology of reasoning.

    The central topic of the thesis is the relationship between logic and the cognitive psychology of reasoning. This topic is treated in large part through a detailed examination of the recent work of P. N. Johnson-Laird, who has elaborated a widely-read and influential theory in the field. The thesis is divided into two parts: the first is a more general and philosophical coverage of some of the most central issues to be faced in relating psychology to logic, while the second draws upon this as introductory material for a critique of Johnson-Laird's 'Mental Model' theory, particularly as it applies to syllogistic reasoning. An approach similar to Johnson-Laird's is taken to cognitive psychology, which centrally involves the notion of computation. On this view, a cognitive model presupposes an algorithm which can be seen as specifying the behaviour of a system in ideal conditions. Such behaviour is closely related to the notion of 'competence' in reasoning, which in turn is often described in terms of logic. Insofar as a logic is taken to specify the competence of reasoners in some domain, it forms a set of conditions on the 'input-output' behaviour of the system, to be accounted for by the algorithm. Cognitive models, however, must also be subjected to empirical test, and indeed are commonly built in a highly empirical manner. A strain can therefore develop between the empirical and the logical pressures on a theory of reasoning. Cognitive theories thus become entangled in a web of recently much-discussed issues concerning the rationality of human reasoners and the justification of a logic as a normative system. There has been increased interest in the view that logic is subject to revision and development, in which there is a recognised place for the influence of psychological investigation.
It is held, in this thesis, that logic and psychology are revealed by these considerations to be interdetermining in interesting ways, under the general a priori requirement that people are, in an important and particular sense, rational. Johnson-Laird's theory is a paradigm case of the sort of cognitive theory dealt with here. It is especially significant in view of the strong claims he makes about its relation to logic, and the role the latter plays in its justification and interpretation. The theory is claimed to be revealing about fundamental issues in semantics and the nature of rationality. These claims are examined in detail, and several crucial ones refuted. Johnson-Laird's models are found to be wanting in the level of empirical support provided, and in their ability to sustain the considerable structure of explanation they are required to bear. They fail, most importantly, to be distinguishable from certain other kinds of models, at a level of theory where the putative differences are critical. The conclusion to be drawn is that the difficulties in this field are not yet properly appreciated. Psychological explanation requires a complexity which is hard to reconcile with the clarity and simplicity required for logical insights.