“So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
Transformative artificially intelligent tools, such as ChatGPT, designed to generate sophisticated text indistinguishable from that produced by a human, are applicable across a wide range of contexts. The technology presents opportunities as well as challenges, often ethical and legal, and has the potential for both positive and negative impacts on organisations, society, and individuals. Offering multidisciplinary insight into some of these, this article brings together 43 contributions from experts in fields such as computer science, marketing, information systems, education, policy, hospitality and tourism, management, publishing, and nursing. The contributors acknowledge ChatGPT’s capabilities to enhance productivity and suggest that it is likely to offer significant gains in the banking, hospitality and tourism, and information technology industries, and to enhance business activities such as management and marketing. Nevertheless, they also consider its limitations, disruptions to practices, threats to privacy and security, and the consequences of biases, misuse, and misinformation. Opinion is split on whether ChatGPT’s use should be restricted or legislated. Drawing on these contributions, the article identifies questions requiring further research across three thematic areas: knowledge, transparency, and ethics; digital transformation of organisations and societies; and teaching, learning, and scholarly research. The avenues for further research include: identifying the skills, resources, and capabilities needed to handle generative AI; examining biases of generative AI attributable to training datasets and processes; exploring business and societal contexts best suited for generative AI implementation; determining optimal combinations of human and generative AI for various tasks; identifying ways to assess the accuracy of text produced by generative AI; and uncovering the ethical and legal issues in using generative AI across different contexts.
Computer Vision and Architectural History at Eye Level: Mixed Methods for Linking Research in the Humanities and in Information Technology
Information on the history of architecture is embedded in our daily surroundings, in vernacular and heritage buildings and in physical objects, photographs and plans. Historians study these tangible and intangible artefacts and the communities that built and used them. Valuable insights are thus gained into the past and the present, which also provide a foundation for designing the future. Given that our understanding of the past is limited by the inadequate availability of data, the article demonstrates that advanced computer tools can help gain more and better-linked data from the past. Computer vision can make a decisive contribution to the identification of image content in historical photographs. This application is particularly interesting for architectural history, where visual sources play an essential role in understanding the built environment of the past, yet a lack of reliable metadata often hinders the use of materials. Automated recognition contributes to making a variety of image sources usable for research.
Attractive User Interface Elements : Measurement and prediction
The years 2020–2021 mark a time when the global population was confronted with a worldwide pandemic. The lockdown had devastating consequences for many industries and individuals, and the emergence of global economies into the post-pandemic recovery has only just begun. However, as people adapted to the pandemic by embracing a mobile lifestyle, industries that employed graphical user interfaces as a means of human-computer interaction saw tremendous growth, exceeding expectations despite predictions of a slowdown. One example is the mobile apps and games markets, touted as the fastest-growing marketplaces worldwide. At the moment, the impact of the mobile economy is undeniably high, and it shows no signs of stalling. As we look ahead and start the 'return to physical', we can see new mobile habits take shape in our everyday life.
Today, people conduct most daily functions via graphical user interfaces, due to the increasing technology-mediated nature of all human praxis, such as socializing, work, education, and entertainment. The interaction is realized on various different platforms, be they on desktop, mobile devices, VR or (smart) TVs. Although user interfaces themselves are not novel, their role is more significant now than anyone could have imagined only a few decades ago. Attractive visual designs in user interfaces have proven to enhance many aspects concerning usability, sense of pleasure and trust, but evaluating aesthetics is challenging due to the subjective nature of user perception. Although several theories and measurement instruments have been developed in order to assess and design pleasing user interfaces, the measures remain scattered. Therefore, the aim of this dissertation is to expand knowledge on how the visual aesthetics of graphical user interfaces can be modelled, evaluated, and assessed.
Through four studies, this dissertation provides an overview of the state-of-the-art in the literature on measurement instruments of visual aesthetics for graphical user interfaces. The dimensions of aesthetic perception that emerge in the context of user interface elements are also examined and introduced by developing a scale for measuring perceptions. As engaging and intuitive imagery has become one of the most valuable assets in today’s attention economy, the studies also observe individual user perceptions across different demographic groups and their relationship to aesthetic qualities in order to determine how they predict the success of graphical elements. The publications employ methodology ranging from a systematic literature review to sophisticated quantitative statistical modelling methods to accurately identify and address each of the described phenomena by standardized means.
The findings provided by this dissertation contribute substantially to the existing literature on the measurement and prediction of visually pleasing graphical user interfaces, both practically and theoretically. Advancing knowledge and guidelines in this fast-paced field requires assessment from a wide perspective, including the observation of prior work and the adaptation of measures to the modern economy by highlighting user behavior and preferences. This is particularly important given the growing prevalence of graphical user interfaces, which will continue shaping our lives in unimaginable ways.
AI Hype: Public Relations and AI's doomsday machine
This chapter broadens current professional debates by highlighting a different but vital relationship between the PR profession and AI, one in which PR professionals – acting as AI cheerleaders – are deeply implicated in generating AI hype. My discussion explores recent market studies research on disruption and hype cycles, before delving into the latest, somewhat disturbing phase in AI’s hype cycle, in which end-of-the-world scenarios are invoked to stimulate a climate of fear around AI. The chapter concludes by exploring some ethical concerns with promoting AI and automation as humanity’s inevitable future
Northeastern Illinois University, Academic Catalog 2023-2024
Doing Things with Words: The New Consequences of Writing in the Age of AI
Exploring the entanglement between artificial intelligence (AI) and writing, this thesis asks, what does writing with AI do? And, how can this doing be made visible, since the consequences of information and communication technologies (ICTs) are so often opaque? To propose one set of answers to the questions above, I begin by working with Google Smart Compose, the word-prediction AI Google launched to more than a billion global users in 2018, by way of a novel method I call AI interaction experiments. In these experiments, I transcribe texts into Gmail and Google Docs, carefully documenting Smart Compose’s interventions and output. Wedding these experiments to existing scholarship, I argue that writing with AI does three things: it engages writers in asymmetrical economic relations with Big Tech; it entangles unwitting writers in climate crisis by virtue of the vast resources, as Bender et al. (2021), Crawford (2021), and Strubell et al. (2019) have pointed out, required to train and sustain AI models; and it perpetuates linguistic racism, further embedding harmful politics of race and representation in everyday life. In making these arguments, my purpose is to intervene in normative discourses surrounding technology, exposing hard-to-see consequences so that we—people in the academy, critical media scholars, educators, and especially those of us in dominant groups— may envision better futures. Toward both exposure and reimagining, my dissertation’s primary contributions are research-creational work. Research-creational interventions accompany each of the three major chapters of this work, drawing attention to the economic, climate, and race relations that word-prediction AI conceals and to the otherwise opaque premises on which it rests. The broader wager of my dissertation is that what technologies do and what they are is inseparable: the relations a technology enacts must be exposed, and they must necessarily figure into how we understand the technology itself. 
Because writing with AI enacts particular economic, climate, and race relations, these relations must figure into our understanding of what it means to write with AI and, because of AI’s increasing entanglement with acts of writing, into our very understanding of what it means to write.
Machine Learning Algorithm for the Scansion of Old Saxon Poetry
Several scholars have designed tools to perform the automatic scansion of poetry in many languages, but none of these tools deal with Old Saxon or Old English. This project aims to be a first attempt to create a tool for these languages. We implemented a Bidirectional Long Short-Term Memory (BiLSTM) model to perform the automatic scansion of Old Saxon and Old English poems. Since this model uses supervised learning, we manually annotated the Heliand manuscript and used the resulting corpus as a labeled dataset to train the model. The evaluation of the algorithm’s performance reached 97% accuracy and a 99% weighted average for precision, recall, and F1 score. In addition, we tested the model on some verses from the Old Saxon Genesis and some from The Battle of Brunanburh, and we observed that the model predicted almost all Old Saxon metrical patterns correctly but misclassified the majority of the Old English input verses.
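The weighted averages reported in this abstract can be reproduced by weighting each metrical class's precision, recall, and F1 by its support (its frequency among the gold labels). A minimal stdlib-only sketch of that computation follows; the pattern labels below are invented for illustration and are not taken from the Heliand corpus:

```python
from collections import Counter

def weighted_prf(y_true, y_pred):
    """Support-weighted average precision, recall, and F1 over all
    classes present in the gold labels y_true."""
    support = Counter(y_true)
    total = len(y_true)
    avg_p = avg_r = avg_f = 0.0
    for c in sorted(support):
        # true positives: gold label c predicted as c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        predicted_c = sum(1 for p in y_pred if p == c)
        prec = tp / predicted_c if predicted_c else 0.0
        rec = tp / support[c]
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        weight = support[c] / total  # class frequency in the gold labels
        avg_p += weight * prec
        avg_r += weight * rec
        avg_f += weight * f1
    return avg_p, avg_r, avg_f

# toy example with hypothetical half-line pattern labels
gold = ["A", "A", "B", "C", "A", "B"]
pred = ["A", "A", "B", "C", "B", "B"]
p, r, f = weighted_prf(gold, pred)  # → 0.889, 0.833, 0.833 (approx.)
```

Weighting by support means frequent metrical patterns dominate the average, which is why a weighted F1 can sit above the plain accuracy when the common classes are predicted well.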
Beyond Quantity: Research with Subsymbolic AI
How do artificial neural networks and other forms of artificial intelligence interfere with methods and practices in the sciences? Which interdisciplinary epistemological challenges arise when we think about the use of AI beyond its dependency on big data? Not only the natural sciences, but also the social sciences and the humanities seem to be increasingly affected by current approaches of subsymbolic AI, which master problems of quality (fuzziness, uncertainty) in a hitherto unknown way. But what are the conditions, implications, and effects of these (potential) epistemic transformations, and how must research on AI be configured to address them adequately?
Morris Catalog 2023-2025
This document serves as an official historical record for a specific period in time. The information found is subject to change without notice. Colleges and departments make changes to their degree requirements and course descriptions frequently. More information is available at catalogs.umn.edu.