2,451 research outputs found

    Does AI Qualify for the Job?: A Bidirectional Model Mapping Labour and AI Intensities

    Full text link
    [EN] In this paper we present a setting for examining the relation between the distribution of research intensity in AI research and its relevance for a range of work tasks (and occupations) in current and simulated scenarios. We perform a mapping between labour and AI using a set of cognitive abilities as an intermediate layer. This setting favours a two-way interpretation to analyse (1) what impact current or simulated AI research activity has or would have on labour-related tasks and occupations, and (2) what areas of AI research activity would be responsible for a desired or undesired effect on specific labour tasks and occupations. Concretely, in our analysis we map 59 generic labour-related tasks from several worker surveys and databases to 14 cognitive abilities from the cognitive science literature, and these to a comprehensive list of 328 AI benchmarks used to evaluate progress in AI techniques. We provide this model and its implementation as a tool for simulations. We also show the effectiveness of our setting with some illustrative examples.

    This material is based upon work supported by the EU (FEDER) and the Spanish MINECO under grant RTI2018-094403-B-C3, and by the Generalitat Valenciana PROMETEO/2019/098. F. Martínez-Plumed was also supported by INCIBE (Ayudas para la excelencia de los equipos de investigación avanzada en ciberseguridad), the European Commission (JRC) HUMAINT project (CT-EX2018D335821-101), and UPV (PAID-06-18). J. Hernández-Orallo is also funded by FLI grant RFP2-152.

    Martínez-Plumed, F.; Tolan, S.; Pesole, A.; Hernández-Orallo, J.; Fernández-Macías, E.; Gómez, E. (2020). Does AI Qualify for the Job?: A Bidirectional Model Mapping Labour and AI Intensities. Association for Computing Machinery (ACM). 94-100. https://doi.org/10.1145/3375627.3375831
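    The abstract describes the two-layer mapping only at a high level. As a rough illustration, its bidirectional reading can be modelled with two linear maps, one from cognitive abilities to tasks and one from abilities to benchmarks. The sketch below is a toy version with made-up dimensions and weights, not the paper's released tool.

    ```python
    import numpy as np

    # Toy dimensions; the paper's model uses 59 tasks, 14 abilities, 328 benchmarks.
    rng = np.random.default_rng(0)
    n_tasks, n_abilities, n_benchmarks = 5, 3, 8

    # task_ability[i, j]: reliance of task i on cognitive ability j
    # ability_bench[j, k]: degree to which benchmark k exercises ability j
    task_ability = rng.random((n_tasks, n_abilities))
    ability_bench = rng.random((n_abilities, n_benchmarks))

    def task_impact(research_intensity):
        """Forward direction: benchmark research intensity -> per-task AI impact."""
        return task_ability @ (ability_bench @ research_intensity)

    def benchmark_relevance(task_effect):
        """Backward direction: per-task effect of interest -> benchmark contributions."""
        return ability_bench.T @ (task_ability.T @ task_effect)

    scenario = rng.random(n_benchmarks)           # simulated research activity
    print(task_impact(scenario))                  # impact on each labour task
    print(benchmark_relevance(np.ones(n_tasks)))  # benchmarks driving all tasks equally
    ```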

    Redefining Creativity in the Era of AI? Perspectives of Computer Scientists and New Media Artists

    Get PDF
    Artificial intelligence (AI) has breached creativity research. The advancements of creative AI systems dispute the common definitions of creativity that have traditionally focused on five elements: actor, process, outcome, domain, and space. Moreover, creative workers, such as scientists and artists, increasingly use AI in their creative processes, and the concept of co-creativity has emerged to describe blended human–AI creativity. These issues evoke the question of whether creativity requires redefinition in the era of AI. Currently, co-creativity is mostly studied within the framework of computer science in pre-organized laboratory settings. This study contributes from a human scientific perspective with 52 interviews of Finland-based computer scientists and new media artists who use AI in their work. The results suggest scientists and artists use similar elements to define creativity. However, the role of AI differs between the scientific and artistic creative processes. Scientists need AI to produce accurate and trustworthy outcomes, whereas artists use AI to explore and play. Unlike the scientists, some artists also considered their work with AI co-creative. We suggest that co-creativity can explain the contemporary creative processes in the era of AI and should be the focal point of future creativity research.

    © 2022 The Author(s). Published with license by Taylor & Francis Group, LLC. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited, and is not altered, transformed, or built upon in any way.

    Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks

    Full text link
    [EN] In this paper we develop a framework for analysing the impact of Artificial Intelligence (AI) on occupations. This framework maps 59 generic tasks from worker surveys and an occupational database to 14 cognitive abilities (that we extract from the cognitive science literature) and these to a comprehensive list of 328 AI benchmarks used to evaluate research intensity across a broad range of different AI areas. The use of cognitive abilities as an intermediate layer, instead of mapping work tasks to AI benchmarks directly, allows for an identification of potential AI exposure for tasks for which AI applications have not been explicitly created. An application of our framework to occupational databases gives insights into the abilities through which AI is most likely to affect jobs and allows for a ranking of occupations with respect to AI exposure. Moreover, we show that some jobs that were not known to be affected by previous waves of automation may now be subject to higher AI exposure. Finally, we find that some of the abilities where AI research is currently very intense are linked to tasks with comparatively limited labour input in the labour markets of advanced economies (e.g., visual and auditory processing using deep learning, and sensorimotor interaction through (deep) reinforcement learning).

    Tolan, S.; Pesole, A.; Martínez-Plumed, F.; Fernández-Macías, E.; Hernández-Orallo, J.; Gómez, E. (2021). Measuring the Occupational Impact of AI: Tasks, Cognitive Abilities and AI Benchmarks. Journal of Artificial Intelligence Research. 71:191-236. https://doi.org/10.1613/jair.1.12647
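    The occupation ranking the abstract mentions can be pictured as a share-weighted aggregation of task-level exposure scores. Below is a minimal, hypothetical sketch of that aggregation step; the task names, occupations, and numbers are invented for illustration and do not come from the paper.

    ```python
    # Task-level AI exposure scores (illustrative values).
    task_exposure = {"reading": 0.7, "calculating": 0.9, "lifting": 0.2, "advising": 0.5}

    # Each occupation described by the share of working time spent on each task.
    occupations = {
        "clerk":   {"reading": 0.4, "calculating": 0.5, "lifting": 0.1},
        "porter":  {"lifting": 0.8, "reading": 0.2},
        "adviser": {"advising": 0.7, "reading": 0.3},
    }

    def occupation_exposure(task_shares):
        """Share-weighted mean of task-level exposures."""
        total = sum(task_shares.values())
        return sum(s * task_exposure[t] for t, s in task_shares.items()) / total

    # Rank occupations from most to least AI-exposed.
    for occ in sorted(occupations, key=lambda o: occupation_exposure(occupations[o]), reverse=True):
        print(f"{occ}: {occupation_exposure(occupations[occ]):.2f}")
    ```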

    Tracing the Mediating Contexts of Disciplinary Writing Instruction from Professional Development to Classrooms.

    Full text link
    Despite the push towards educational reform for disciplinary writing instruction, the role of professional development in achieving those reforms remains poorly understood. To understand the experiences of subject area teachers learning to teach disciplinary writing, I conducted a qualitative study examining a professional development program called the “Writing Group” (WG), which was designed to prepare teachers to teach disciplinary writing through collaborative learning experiences. The WG comprised a multidisciplinary team of teachers (n = 267) that participated in professional development involving three-day trainings and monthly meetings. I studied the context of the professional development and followed four focal teachers into their classrooms to study how teachers enacted what they learned in the professional development. I engaged in constant comparative analysis of the professional development and instructional data to find patterns within and across the data sources and to triangulate my findings. Analyses revealed that the contexts of the WG and teachers’ schools mediated teachers’ understandings of disciplinary literacy instruction and how they taught writing. The WG gravitated toward general perspectives on writing in part because teachers conflated interdisciplinary approaches to writing instruction with generic approaches, which did not fully meet the teachers’ instructional needs. Even so, teachers were highly invested in the professional development because they felt a sense of solidarity and ownership of their teaching within the WG community. Furthermore, the general approaches to writing fed into teachers’ instruction, even as the pressures of teaching bolstered teachers’ use of generic writing strategies and their loyalty to the WG. The teachers found solace in the WG as a protected space away from these pressures of teaching, so they remained positive about the WG and did not critique it or its members for fear of jeopardizing the organization, even when they enacted strategies with limited success. My findings suggest that disciplinary writing instruction is challenging and requires disciplinary understanding as well as literacy expertise. Further examination of the structures of professional development is necessary to understand how to honor teachers’ expertise while also leaving room for members to productively critique each other and grow.

    PhD. Educational Studies. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120738/1/mmnguyen_1.pd

    Technology for the Future: In-Space Technology Experiments Program, part 2

    Get PDF
    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of the critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two and contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.

    The Survey, Taxonomy, and Future Directions of Trustworthy AI: A Meta Decision of Strategic Decisions

    Full text link
    When making strategic decisions, we are often confronted with overwhelming information to process. The situation can be further complicated when some pieces of evidence contradict each other or are paradoxical. The challenge then becomes how to determine which information is useful and which should be eliminated. This process is known as meta-decision. Likewise, when it comes to using Artificial Intelligence (AI) systems for strategic decision-making, placing trust in the AI itself becomes a meta-decision, given that many AI systems are viewed as opaque "black boxes" that process large amounts of data. Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI). We propose a new approach to address this issue by introducing a novel taxonomy, or framework, of TAI, which encompasses three crucial domains corresponding to different levels of trust: articulate, authentic, and basic. To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reproducibility, reliability, and sustainability. We aim to use this taxonomy to conduct a comprehensive survey and explore different TAI approaches from a strategic decision-making perspective.
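    As a rough illustration of how the ten dimensions could be operationalised, the sketch below scores a system against each dimension and aggregates the scores. The abstract does not specify how dimensions map onto the three domains or how scores are combined, so the aggregation rule and all values here are assumptions.

    ```python
    # The ten trust dimensions named in the abstract.
    DIMENSIONS = [
        "explainability/transparency", "fairness/diversity", "generalizability",
        "privacy", "data governance", "safety/robustness", "accountability",
        "reproducibility", "reliability", "sustainability",
    ]

    def trust_score(scores):
        """Mean score over the ten dimensions; unscored dimensions count as 0."""
        return sum(scores.get(d, 0.0) for d in DIMENSIONS) / len(DIMENSIONS)

    # Illustrative per-dimension assessments of some AI system, on a 0-1 scale.
    system = {"privacy": 0.8, "reliability": 0.9, "explainability/transparency": 0.4}
    print(f"aggregate trust score: {trust_score(system):.2f}")
    ```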

    Integration of decision support systems to improve decision support performance

    Get PDF
    Decision support systems (DSS) are a well-established research and development area. Traditional isolated, stand-alone DSS have recently been facing new challenges. To improve the performance of DSS and meet these challenges, research has been actively carried out to develop integrated decision support systems (IDSS). This paper reviews current research efforts on the development of IDSS. The focus of the paper is the integration aspect of IDSS, viewed from multiple perspectives, and the technologies that support this integration. More than 100 papers and software systems are discussed. Current research efforts and the development status of IDSS are explained, compared and classified. In addition, future trends and challenges in integration are outlined. The paper concludes that by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.
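    One integration pattern consistent with the review's theme, sketched hypothetically below, is to expose several stand-alone DSS components through a shared interface and combine their recommendations. All class names and the averaging rule are illustrative assumptions, not taken from the paper.

    ```python
    from abc import ABC, abstractmethod

    class DecisionComponent(ABC):
        @abstractmethod
        def recommend(self, problem: dict) -> dict:
            """Return a mapping of option -> score for the decision problem."""

    class RuleBasedDSS(DecisionComponent):
        def recommend(self, problem):
            return {"option_a": 0.6, "option_b": 0.4}  # stand-in for a rule engine

    class DataDrivenDSS(DecisionComponent):
        def recommend(self, problem):
            return {"option_a": 0.3, "option_b": 0.7}  # stand-in for a predictive model

    def integrated_recommendation(components, problem):
        """Average the component scores and return the best-scoring option."""
        merged = {}
        for c in components:
            for option, score in c.recommend(problem).items():
                merged[option] = merged.get(option, 0.0) + score / len(components)
        return max(merged, key=merged.get)

    print(integrated_recommendation([RuleBasedDSS(), DataDrivenDSS()], {}))
    ```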