Chatbots as a novel access method for government open data
In this discussion paper, we propose to employ chatbots as a user-friendly interface to open data published by organizations, with a specific focus on public administrations. Open data are especially useful in e-Government initiatives, but their exploitation by end users is currently hampered by the lack of user-friendly access methods. At the same time, the current user experience of social networks has made people accustomed to chatting. Building on cognitive technologies, we prototyped a chatbot on top of the OpenCantieri dataset published by the Italian Ministero delle Infrastrutture e Trasporti, and we argue that this model can be extended into a generally available access method for open data.
ChatPal Chatbot dialogue data set
The scripts used in the ChatPal chatbot are freely available as an output of the ChatPal project. The datasets contain the chatbot utterances in English, Swedish, Finnish and Scottish Gaelic. Any replies collected from users through the ChatPal chatbot are not included in these data. Datasets are available in CSV format with Unicode character encoding (UTF-8). Disclaimer: The datasets are open access, should be used appropriately, and can be repurposed. However, the ChatPal project team are not responsible for how you choose to use the data or repurpose the content.
How "open" are the conversations with open-domain chatbots? A proposal for Speech Event based evaluation
Open-domain chatbots are supposed to converse freely with humans without being restricted to a topic, task or domain. However, the boundaries and/or contents of open-domain conversations are not clear. To clarify the boundaries of "openness", we conduct two studies: First, we classify the types of "speech events" encountered in a chatbot evaluation data set (i.e., Meena by Google) and find that these conversations mainly cover the "small talk" category and exclude the other speech event categories encountered in real-life human-human communication. Second, we conduct a small-scale pilot study to generate online conversations covering a wider range of speech event categories between two humans vs. a human and a state-of-the-art chatbot (i.e., Blender by Facebook). A human evaluation of these generated conversations indicates a preference for human-human conversations, since the human-chatbot conversations lack coherence in most speech event categories. Based on these results, we suggest (a) using the term "small talk" instead of "open-domain" for the current chatbots, which are not that "open" in terms of conversational abilities yet, and (b) revising the evaluation methods to test chatbot conversations against other speech events.
Designing Chatbots for Crises: A Case Study Contrasting Potential and Reality
Chatbots are becoming ubiquitous technologies, and their popularity and adoption are rapidly spreading. The potential of chatbots in engaging people with digital services is widely recognised. However, the reputation of this technology with regard to usefulness and real impact remains rather questionable. Studies that evaluate how people perceive and utilise chatbots are generally lacking. During the last Kenyan elections, we deployed a chatbot on Facebook Messenger to help people submit reports of violence and misconduct experienced at the polling stations. Even though the chatbot was visited more than 3,000 times, there was a clear mismatch between the users' perception of the technology and its design. In this paper, we analyse the user interactions and content generated through this application and discuss the challenges and directions for designing more effective chatbots.