A study of cyber hate on Twitter with implications for social media governance strategies
This paper explores ways in which the harmful effects of cyber hate may be mitigated through mechanisms for enhancing the self-governance of new digital spaces. We report findings from a mixed methods study of responses to cyber hate posts, which aimed to: (i) understand how people interact in this context by undertaking qualitative interaction analysis and developing a statistical model to explain the volume of responses to cyber hate posted to Twitter, and (ii) explore use of machine learning techniques to assist in identifying cyber hate counter-speech
From computer ethics to responsible research and innovation in ICT: The transition of reference discourses informing ethics-related research in information systems
The discourse concerning computer ethics qualifies as a reference discourse for ethics-related IS research. Theories, topics and approaches of computer ethics are reflected in IS. The paper argues that there is currently a broader development in the area of research governance, which is referred to as "responsible research and innovation" (RRI). RRI applied to information and communication technology (ICT) addresses some of the limitations of computer ethics and points toward a broader approach to the governance of science, technology and innovation. Taking this development into account will help IS increase its relevance and make optimal use of its established strengths
Refining Vision Videos
[Context and motivation] Complex software-based systems involve several stakeholders, their activities and interactions with the system. Vision videos are used during the early phases of a project to complement textual representations. They visualize previously abstract visions of the product and its use. By creating, elaborating, and discussing vision videos, stakeholders and developers gain an improved shared understanding of how those abstract visions could translate into concrete scenarios and requirements to which individuals can relate. [Question/problem] In this paper, we investigate two aspects of refining vision videos: (1) refining the vision by providing alternative answers to previously open issues about the system to be built; (2) a refined understanding of the camera perspective in vision videos. The impact of using a subjective (or "ego") perspective is compared to the usual third-person perspective. [Methodology] We use shopping in rural areas as a real-world application domain for refining vision videos. Both aspects of refining vision videos were investigated in an experiment with 20 participants. [Contribution] Subjects made significantly more contributions when they received both video and text rather than only one, even with very short texts and short video clips. Subjective video elements were rated as positive. However, there was no significant preference for either subjective or non-subjective videos in general.
Comment: 15 pages, 25th International Working Conference on Requirements Engineering: Foundation for Software Quality 2019
"It would be pretty immoral to choose a random algorithm": Opening up algorithmic interpretability and transparency
In recent years, significant concerns have arisen regarding the increasing pervasiveness of algorithms and the impact of automated decision-making in our lives. Particularly problematic is the lack of transparency surrounding the development of these algorithmic systems and their use. It is often suggested that in order to make algorithms more fair, they should be made more transparent; but exactly how this can be achieved remains unclear. This paper reports on empirical work conducted to open up algorithmic interpretability and transparency. We conducted discussion-based experiments centred around a limited resource allocation scenario which required participants to select their most and least preferred algorithms in a particular context. Our results revealed diversity in participant preferences but consistency in the ways that participants invoked normative concerns and the importance of context when accounting for their selections. These findings demonstrate the value in pursuing algorithmic interpretability and transparency whilst also highlighting the complexities surrounding their accomplishment
Distilling Privacy Requirements for Mobile Applications
As mobile computing applications have become commonplace, it is increasingly important for them to address end-users' privacy requirements. Privacy requirements depend on a number of contextual socio-cultural factors to which mobility adds another level of contextual variation. However, traditional requirements elicitation methods do not sufficiently account for contextual factors and therefore cannot be used effectively to represent and analyse the privacy requirements of mobile end users. On the other hand, methods that do investigate contextual factors tend to produce data that does not lend itself to the process of requirements extraction. To address this problem we have developed a Privacy Requirements Distillation approach that employs a problem analysis framework to extract and refine privacy requirements for mobile applications from raw data gathered through empirical studies involving end users. Our approach introduces privacy facets that capture patterns of privacy concerns which are matched against the raw data. We demonstrate and evaluate our approach using qualitative data from an empirical study of a mobile social networking application
Empowerment or Engagement? Digital Health Technologies for Mental Healthcare
We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, key concepts that are used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns that relate to digital health technologies for mental healthcare. We frame these concerns using five key ethical principles for AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare
e-Research Infrastructure Development and Community Engagement
No abstract available
Six Human-Centered Artificial Intelligence Grand Challenges
Widespread adoption of artificial intelligence (AI) technologies is substantially affecting the human condition in ways that are not yet well understood. Negative unintended consequences abound, including the perpetuation and exacerbation of societal inequalities and divisions via algorithmic decision making. We present six grand challenges for the scientific community to create AI technologies that are human-centered, that is, ethical, fair, and enhance the human condition. These grand challenges are the result of an international collaboration across academia, industry and government and represent the consensus views of a group of 26 experts in the field of human-centered artificial intelligence (HCAI). In essence, these challenges advocate for a human-centered approach to AI that (1) is centered in human well-being, (2) is designed responsibly, (3) respects privacy, (4) follows human-centered design principles, (5) is subject to appropriate governance and oversight, and (6) interacts with individuals while respecting humans' cognitive capacities. We hope that these challenges and their associated research directions serve as a call to action to conduct research and development in AI that serves as a force multiplier towards more fair, equitable and sustainable societies