
    The Intuitive Appeal of Explainable Machines

    Algorithmic decision-making has become synonymous with inexplicable decision-making, but what makes algorithms so difficult to explain? This Article examines what sets machine learning apart from other ways of developing rules for decision-making and the problem these properties pose for explanation. We show that machine learning models can be both inscrutable and nonintuitive and that these are related, but distinct, properties. Calls for explanation have treated these problems as one and the same, but disentangling the two reveals that they demand very different responses. Dealing with inscrutability requires providing a sensible description of the rules; addressing nonintuitiveness requires providing a satisfying explanation for why the rules are what they are. Existing laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), as well as techniques within machine learning, are focused almost entirely on the problem of inscrutability. While such techniques could allow a machine learning system to comply with existing law, doing so may not help if the goal is to assess whether the basis for decision-making is normatively defensible. In most cases, intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation. But because machine learning is often valued for its ability to uncover statistical relationships that defy intuition, relying on intuition is not a satisfying approach. This Article thus argues for other mechanisms of normative evaluation. To know why the rules are what they are, one must seek explanations of the process behind a model's development, not just explanations of the model itself.
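
    The Article's distinction can be illustrated with a common interpretability technique of the kind it describes as addressing inscrutability (not a method the Article itself prescribes): fitting a simple, readable surrogate to a black-box model. Below is a minimal sketch in Python, assuming scikit-learn; the dataset, features, and model choices are illustrative only.

        # Fit a shallow decision tree to mimic a black-box model. The tree's
        # printed rules give a sensible description of what the model does
        # (addressing inscrutability), but say nothing about why those rules
        # are justified (nonintuitiveness).
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
        black_box = RandomForestClassifier(random_state=0).fit(X, y)

        # Train the surrogate on the black box's predictions, not the true labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))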

    Exploring Algorithmic Literacy for College Students: An Educator’s Roadmap

    Research shows that college students are largely unaware of the impact of algorithms on their everyday lives, and most university students are not taught about algorithms as part of the regular curriculum. This exploratory, qualitative study examined subject-matter experts' insights and perceptions of the knowledge components, coping behaviors, and pedagogical considerations that could aid faculty in teaching algorithmic literacy to college students. Eleven individual, semi-structured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. Findings suggested three sets of knowledge components that would contribute to students' algorithmic literacy: general characteristics and distinguishing traits of algorithms; key domains in everyday life that use algorithms, including their potential benefits and risks; and ethical considerations for the use and application of algorithms. Findings also suggested five behaviors that students could use to cope better with algorithmic systems and nine teaching strategies to help improve students' algorithmic literacy. Suggestions also surfaced for alternative forms of assessment, potential placement in the curriculum, and how to distinguish between basic algorithmic awareness and algorithmic literacy. Recommendations are presented for expanding the current Association of College and Research Libraries' Framework for Information Literacy for Higher Education (2016) to include algorithmic literacy more explicitly.

    Research Perspectives: The Rise of Human Machines: How Cognitive Computing Systems Challenge Assumptions of User-System Interaction

    Cognitive computing systems (CCS) are a new class of computing systems that implement more human-like cognitive abilities. CCS are not a typical technological advancement but an unprecedented advance toward human-like systems fueled by artificial intelligence. Such systems can adapt to situations, perceive their environments, and interact with humans and other technologies. Due to these properties, CCS are already disrupting established industries, such as retail, insurance, and healthcare. As we make the case in this paper, the increasingly human-like capabilities of CCS challenge five fundamental assumptions that we as IS researchers have held about how users interact with IT artifacts. These assumptions pertain to (1) the direction of the user-artifact relationship, (2) the artifact's awareness of its environment, (3) functional transparency, (4) reliability, and (5) the user's awareness of artifact use. We argue that the disruption of these five assumptions limits the applicability of our extant body of knowledge to CCS. Consequently, CCS present a unique opportunity for novel theory development and associated contributions. We argue that IS is well positioned to seize this opportunity and present research questions that, if answered, will lead to interesting, influential, and original theories.

    ERP implementation methodologies and frameworks: a literature review

    Enterprise Resource Planning (ERP) implementation is a complex and dynamic process involving a combination of technological and organizational interactions. An ERP implementation project is often the single largest IT project an organization has ever launched, and it requires a mutual fit of system and organization. Moreover, an ERP implementation that supports business processes across many different departments is not a generic, rigid, and uniform undertaking; it depends on a variety of factors. As a result, issues surrounding the ERP implementation process have become a major concern in industry. ERP implementation therefore receives attention from both practitioners and scholars, and both the business and the academic literature are abundant, though not always conclusive or coherent. However, research on ERP systems has so far focused mainly on diffusion, use, and impact issues. Less attention has been given to the methods used during the configuration and implementation of ERP systems; although these methods are commonly used in practice, they remain largely unexplored and undocumented in Information Systems research. The academic relevance of this research is thus its contribution to the existing body of scientific knowledge. A brief annotated literature review is conducted to evaluate the current state of the academic literature. The purpose is to present a systematic overview of relevant ERP implementation methodologies and frameworks, with the aim of achieving a better taxonomy of ERP implementation methodologies. This paper is useful to researchers interested in ERP implementation methodologies and frameworks, and the results will serve as input for a classification of the existing methodologies and frameworks. The paper also addresses the professional ERP community involved in the ERP implementation process by promoting a better understanding of ERP implementation methodologies and frameworks, their variety, and their history.

    Colombia and the Intelligence Cycle in the 21st Century, the Digital Age

    The intelligence cycle is the main process for developing and obtaining intelligence and is used worldwide. It is now outdated, because it was not created to face the challenges that technology and the digital age have brought: information moves and travels in cyberspace, the current and future terrain of conflict. The intelligence cycle uses technology across different forms of intelligence, taking advantage of current technological developments for search, collection, analysis, and dissemination, but these capabilities are not being fully exploited. Cases have been observed where intelligence failed because the cycle could not be followed, whether due to the speed of information or to a lack of knowledge of the technological systems at the service of intelligence. The intelligence process must be integrated with, and work hand in hand with, technology and cyberspace in order to develop intelligence capability for the 21st century. It is necessary to use all resources and to integrate all existing technological sources, starting from the core of the process. A complete process that integrates the acquisition of intelligence with the use and exploitation of cyberspace and information technology is required to increase, secure, and exploit all available information. This thesis therefore develops a new micro-cycle process for conducting intelligence. It consists of five micro cycles, and its purpose is to integrate intelligence processes and technology for better results in this new era of intelligence development in the 21st century.
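
    The abstract does not name the five micro cycles, so any rendering of the process is speculative. The Python sketch below illustrates only the general idea, one small feedback loop per stage instead of a single monolithic cycle, with hypothetical stage names borrowed from the classic intelligence cycle.

        # Hypothetical sketch: each stage runs its own small loop (a "micro
        # cycle") and stops once its contribution to the shared state is stable.
        # Stage names are assumptions; the thesis's actual five are not given.
        from typing import Callable

        def run_micro_cycle(name: str, step: Callable[[dict], dict],
                            state: dict, max_iters: int = 3) -> dict:
            for _ in range(max_iters):
                new_state = step(state)
                if new_state == state:  # stage-local feedback: stable, stop
                    break
                state = new_state
            print(f"[{name}] {state}")
            return state

        stages = [
            ("direction",     lambda s: {**s, "requirements": ["threat X"]}),
            ("collection",    lambda s: {**s, "raw": ["OSINT", "SIGINT"]}),
            ("processing",    lambda s: {**s, "items": len(s.get("raw", []))}),
            ("analysis",      lambda s: {**s, "assessment": "likely"}),
            ("dissemination", lambda s: {**s, "report": True}),
        ]

        state: dict = {}
        for name, step in stages:
            state = run_micro_cycle(name, step, state)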

    Machines like us? An occupational health perspective on machine learning (artificial intelligence)

    Aim: Drawing on management theories and occupational health and safety (OSH) principles, the use of machine learning (AI) may offer benefits for risk management and economic costs, as well as help prevent sickness absence and work-related illness. The purpose of this thesis was to develop an industry standard, in the form of concrete advice for management and OSH services, on how to avoid sickness absence related to the introduction and application of machine learning (AI). Method: This thesis is a literature-based monograph set in the Norwegian work-life context. An explorative, qualitative research design with a phenomenological approach was chosen. Three main theoretical perspectives were explored: the psychological, the sociological, and the cognitive. In addition, a literature review was performed and gaps in the theoretical map were identified. This set the theoretical framework for the empirical part of the study, consisting of two focus group interviews at two of Norway's largest companies. Both companies were data-driven front runners in their fields and were chosen from a vendor (Kongsberg) and a buyer (Equinor) perspective. These data were then triangulated against an in-depth interview at SINTEF, one of Europe's largest research institutes. For external validity, thick description was added. Limitations include the sample size of the data collection and possible bias in the choice of companies, as well as researcher bias, since the author is a company doctor and leading advisor of a multi-disciplinary occupational health team for one of the companies interviewed. Contributions to the state of the art: To the researcher's knowledge, this may be the first study of its kind in which international OSH advice and research findings from review articles (and a workshop summary) are combined with experience transfer from managers and employees in this phase of machine learning (AI). Findings show that, to prevent sickness absence related to machine learning (AI), decision makers will need to acquire related OSH competence, perform dynamic virtual task analyses, and integrate OSH measures from the outset in early-phase development of machine learning (AI) by applying human-centered design. OSH services will need to work cross-disciplinarily again, develop a common language based on Human Factors, and perform dynamic evaluations of occupational health risk. The tripartite dialogue with the safety delegates will need to be strengthened with the same competence.

    A Gamefied Synthetic Environment for Evaluation of Counter-Disinformation Solutions

    This paper presents a simulation-based approach to countering online dis/misinformation. The disruptive-technology experiment incorporated a synthetic environment (SEN) component based on an adapted Susceptible-Infected-Resistant (SIR) epidemiological model to evaluate and visualize the effectiveness of proposed solutions. The participants in the simulation were given a realistic scenario depicting a dis/misinformation threat and were asked to select a number of solutions described in Ideas-of-Systems (IoS) cards. During the event, the qualitative and quantitative characteristics of the IoS cards were tested in the synthetic environment built on the SIR model. The participants, divided into teams, presented and justified their dis/misinformation strategies, each comprising three IoS card selections. A jury of subject matter experts announced the winning team based on the merits of the proposed strategies and the compatibility of the cards grouped together.
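
    A minimal sketch in Python of how an SIR-style model can be adapted to information spread; the paper's actual parameterization is not given in the abstract, so the dynamics and rates below are standard discrete-time assumptions. Susceptible users can be "infected" by dis/misinformation and later become resistant; a counter-disinformation solution can be modeled as lowering the transmission rate beta or raising the recovery rate gamma.

        # A minimal sketch, assuming standard discrete-time SIR dynamics.
        # S: users not yet exposed; I: users spreading dis/misinformation;
        # R: users who have become resistant to it.
        def simulate_sir(beta: float, gamma: float, s0: float = 0.99,
                         i0: float = 0.01, steps: int = 100) -> list[tuple[float, float, float]]:
            s, i, r = s0, i0, 0.0
            history = [(s, i, r)]
            for _ in range(steps):
                new_infections = beta * s * i  # exposure through contact
                new_resistant = gamma * i      # debunking / loss of interest
                s -= new_infections
                i += new_infections - new_resistant
                r += new_resistant
                history.append((s, i, r))
            return history

        # A solution (an IoS card, say) could be scored by how much it lowers
        # the peak share of spreaders relative to a baseline run:
        baseline = simulate_sir(beta=0.4, gamma=0.1)
        with_solution = simulate_sir(beta=0.2, gamma=0.15)
        print(max(i for _, i, _ in baseline), max(i for _, i, _ in with_solution))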