130 research outputs found
An Inequality Approach to Approximate Solutions of Set Optimization Problems in Real Linear Spaces
This paper explores new notions of approximate minimality in set optimization using a set approach. We propose characterizations of several approximate minimal elements of families of sets in real linear spaces by means of general functionals, which can be unified in an inequality approach. As particular cases, we investigate the use of the prominent Tammer–Weidner nonlinear scalarizing functionals, without assuming any topology, in our context. We also derive numerical methods for obtaining approximate minimal elements of families of finitely many sets by means of these results.
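For orientation, the Tammer–Weidner (also known as Gerstewitz) nonlinear scalarizing functional mentioned in the abstract is commonly defined as follows; the notation below is a standard textbook form, not taken from the paper itself:

```latex
% Tammer–Weidner (Gerstewitz) scalarizing functional: for a linear space Y,
% a proper convex cone C \subseteq Y, and a direction k \in C \setminus (-C),
\varphi_{C,k}(y) \;=\; \inf\{\, t \in \mathbb{R} \;:\; y \in t\,k - C \,\}.
% Key properties: \varphi_{C,k} is monotone with respect to the order induced
% by C, and translation-invariant along k:
% \varphi_{C,k}(y + s\,k) \;=\; \varphi_{C,k}(y) + s \quad \text{for all } s \in \mathbb{R}.
```

In topology-free settings such as the one described, the infimum is taken purely algebraically, which is what allows these functionals to characterize (approximate) minimality in general real linear spaces.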
Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry
The release of openly available, robust natural language generation (NLG) algorithms has spurred much public attention and debate, in part because of the algorithms' purported ability to generate human-like text across various domains. Empirical evidence using incentivized tasks to assess whether people (a) can distinguish and (b) prefer algorithm-generated versus human-written text is lacking. We conducted two experiments assessing behavioral reactions to the state-of-the-art natural language generation algorithm GPT-2 (Ntotal = 830). Using the identical starting lines of human poems, GPT-2 produced samples of poems. From these samples, either a random poem was chosen (Human-out-of-the-loop) or the best one was selected (Human-in-the-loop) and in turn matched with a human-written poem. In a new incentivized version of the Turing Test, participants failed to reliably detect the algorithmically generated poems in the Human-in-the-loop treatment, yet succeeded in the Human-out-of-the-loop treatment. Further, people revealed a slight aversion to algorithm-generated poetry, independent of whether participants were informed about the algorithmic origin of the poem (Transparency) or not (Opacity). We discuss what these results convey about the capacity of NLG algorithms to produce human-like text and propose methodologies for studying such learning algorithms in human-agent experimental settings.
Comment: Computers in Human Behavior 202
Why did the Panama Papers (not) shatter the world? The relationship between journalism and corruption
Ethical Questions Raised by AI-Supported Mentoring in Higher Education
Mentoring is a highly personal and individual process in which mentees draw on expertise and experience to expand their knowledge and achieve individual goals. The emerging use of AI in mentoring processes in higher education not only necessitates adherence to applicable laws and regulations (e.g., relating to data protection and non-discrimination) but also requires a thorough understanding of ethical norms, guidelines, and unresolved issues (e.g., integrity of data, safety and security of systems, confidentiality, avoiding bias, and ensuring trust in and transparency of algorithms). Mentoring in higher education requires one of the highest degrees of trust, openness, and social–emotional support, as much is at stake for mentees, especially their academic attainment, career options, and future life choices. However, ethical compromises seem to be common when digital systems are introduced, and the underlying ethical questions in AI-supported mentoring are still insufficiently addressed in research, development, and application. One of the challenges is to strive for privacy and data economy on the one hand, while Big Data is the prerequisite of AI-supported environments on the other. How can ethical norms and general guidelines of AIED be respected in complex digital mentoring processes? This article aims to start a discourse on the relevant ethical questions and thereby to raise awareness for the ethical development and use of future data-driven, AI-supported mentoring environments in higher education.
The Consequences of Participating in the Sharing Economy: A Transparency-Based Sharing Framework
The sharing economy is estimated to add hundreds of billions of dollars to the global economy and is rapidly growing. However, trust-based commercial sharing, the participation in for-profit peer-to-peer sharing-economy activity, has negative as well as positive consequences for both the interacting parties and uninvolved third parties. To share responsibly, one needs to be aware of the various consequences of sharing. We provide a comprehensive, preregistered, systematic literature review of the consequences of trust-based commercial sharing, identifying 93 empirical papers spanning regions, sectors, and scientific disciplines. Via in-depth coding of the empirical work, we provide an authoritative overview of the economic, social, and psychological consequences of trust-based commercial sharing for involved parties, including service providers, users, and third parties. Based on the aggregate insights, we identify the common denominators for the positive and negative consequences. Whereas a well-functioning infrastructure of payment, insurance, and communication enables the positive consequences, ambiguity about rules, roles, and regulations causes non-negligible negative consequences. To overcome these negative consequences and promote more responsible forms of sharing, we propose the transparency-based sharing framework. Based on the framework, we outline an agenda for future research and discuss emerging managerial implications that arise when trying to increase transparency without jeopardizing the potential of trust-based commercial sharing.
Artificial intelligence as an anti-corruption tool (AI-ACT): Potentials and pitfalls for top-down and bottom-up approaches