
    Report from the STEM 2026 Workshop on Assessment, Evaluation, and Accreditation

    A gathering of science, technology, engineering, and math (STEM) higher education stakeholders met in November 2018 to consider the relationship between innovation in education and assessment. Because assessment in higher education is inextricably linked to both evaluation and accreditation, all three were considered. The first question we asked was: can we build a nation of learners? Answering it starts with considering the student first and foremost; for us as educators, this focus is the foundation of our exploration and makes our values transparent. As educators, how do we know we are having an impact? As members and implementers of institutions, programs, and professional societies, how do we know students are learning and that what they are learning has value? The focus of this conversation was on undergraduate learning, although we acknowledge that the topic is closely tied to successful primary and secondary learning as well as graduate education. Within the realm of undergraduate education, students can attend four-year and two-year institutions, and many learn at both at different times. Thirty-seven participants spent two days considering cases of innovation in STEM education, learning about best practices in assessment, and then discussing the relationship between innovation and assessment at multiple levels within the context of higher education. Six working groups looked at course-level, program-level, and institution-level assessment, as well as cross-disciplinary programs, large-scale policy issues, and the difficult-to-name “non-content/cross-content” group that looked at assessment of transferable skills and attributes such as professional skills, scientific thinking, mindset, and identity, all of which are related to post-baccalaureate success.
These conversations addressed issues that cut across multiple levels, disciplines, and course topics, or that are otherwise seen as tangential or perpendicular to the “required” assessment at institutional, programmatic, or course levels. This report presents the context, recommendations, and “wicked” challenges from the meeting participants and their working groups. Together with the participants' recommendations, these intricate challenges weave a complex web of issues that our community collectively needs to address. They generated a great deal of interest and engagement from workshop participants, and they act as a call to continue these conversations and to seek answers that will improve STEM education through innovation and improved assessment. This material is based upon work supported by the National Science Foundation under Grant No. DUE-1843775. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

    Proceedings of the 8th QS-APPLE Conference Bali, 14th -16th November, 2012

    This volume is a post-conference publication containing the refereed papers from the QS-APPLE Conference held in Bali from 14th to 16th November 2012. You will note some variation in referencing styles, since the conference draws on academics working in all discipline areas across tertiary institutions.

    Digital Disruption in Teaching and Testing

    This book makes a significant contribution to the growing conversation about the place of big data in education. Offering a multidisciplinary approach with a diversity of perspectives from international scholars and industry experts, chapter authors engage in both research- and industry-informed discussions and analyses of the place of big data in education, particularly as large-scale and ongoing assessment practices move into the digital space. The volume offers an innovative, practical, and international view of current and future opportunities and challenges in education, and of the place of assessment in this context.

    Wiring the Writing Center

    As computers have brought important developments to composition studies, writing centers have found themselves creating and improvising applications for their own work, and often for the writing programs and institutions in which they live. Online tutorials, websites with an array of downloadable resources for students, and scheduling and email options are all becoming commonplace among writing centers across the country. However, in spite of impressive work by individual centers, exchange on these topics between and among writing centers has been sporadic. As more writing centers approach getting wired and others continue to upgrade, the need for communication and collaboration becomes ever more obvious, and so does the need to understand the theoretical implications of the choices made.

    “So what if ChatGPT wrote it?”: Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy

    Transformative artificially intelligent tools such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, many of them ethical and legal, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multidisciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split, however, on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining the biases of generative AI attributable to training datasets and processes; exploring the business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.