Commonplaces in risk talk: Face threats and forms of interaction.
Talk about risk is problematic for interaction; it can involve the speaker or hearer saying things that threaten participants' 'face', the ways they want themselves to be seen by others. One way of dealing with these threats to face, and of keeping the conversation going, is the use of commonplaces. Commonplaces, generally applicable and generally known arguments, play an important role in interaction, invoking shared, taken-for-granted perspectives embedded in familiar roles and everyday practices. They are similar to some of the frames discussed in risk communication, but they focus our attention on rhetoric and interaction rather than cognition. In this paper, I show how commonplaces are used in focus group discussions of public choices involving dangers to life or health. They tend to be used in response to dilemmas, when a speaker is put on the spot, and they tend to lead to other commonplaces. Analysis of commonplaces supports those who argue that studies of public perception of risks and programmes of communication about risks need to be sensitive to the personal interactions, rhetorical strategies, and cultural embeddedness of any risk talk.
Effectiveness of Corporate Social Media Activities to Increase Relational Outcomes
This study applies social media analytics to investigate the impact of different corporate social media activities on user word of mouth and attitudinal loyalty. We conduct a multilevel analysis of approximately 5 million tweets regarding the main Twitter accounts of 28 large global companies. We empirically identify different social media activities in terms of social media management strategies (using social media management tools or the web-frontend client), account types (broadcasting or receiving information), and communicative approaches (conversational or disseminative). We find positive effects of social media management tools, broadcasting accounts, and conversational communication on public perception.
CEPS Task Force on Artificial Intelligence and Cybersecurity Technology, Governance and Policy Challenges Task Force Evaluation of the HLEG Trustworthy AI Assessment List (Pilot Version). CEPS Task Force Report 22 January 2020
The Centre for European Policy Studies launched a Task Force on Artificial Intelligence (AI) and
Cybersecurity in September 2019. The goal of this Task Force is to bring attention to the market,
technical, ethical and governance challenges posed by the intersection of AI and cybersecurity,
focusing both on AI for cybersecurity and on cybersecurity for AI. The Task Force is multi-stakeholder
by design and composed of academics, industry players from various sectors, policymakers and civil
society.
The Task Force is currently discussing issues such as the state and evolution of the application of AI
in cybersecurity and cybersecurity for AI; the debate on the role that AI could play in the dynamics
between cyber attackers and defenders; the increasing need for sharing information on threats and
how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and
possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.
As part of such activities, this report aims at assessing the High-Level Expert Group (HLEG) on AI Ethics
Guidelines for Trustworthy AI, presented on April 8, 2019. In particular, this report analyses and
makes suggestions on the Trustworthy AI Assessment List (Pilot version), a non-exhaustive list aimed
at helping the public and the private sector in operationalising Trustworthy AI. The list is composed
of 131 items that are supposed to guide AI designers and developers throughout the process of
design, development, and deployment of AI, although not intended as guidance to ensure
compliance with the applicable laws. The list is in its piloting phase and is currently undergoing a
revision that will be finalised in early 2020.
This report aims to contribute to this revision by addressing in particular the interplay between
AI and cybersecurity. This evaluation has been made according to specific criteria: whether and how
the items of the Assessment List refer to existing legislation (e.g. GDPR, EU Charter of Fundamental
Rights); whether they refer to moral principles (but not laws); whether they consider that AI attacks
are fundamentally different from traditional cyberattacks; whether they are compatible with
different risk levels; whether they are flexible enough in terms of clear/easy measurement,
implementation by AI developers and SMEs; and overall, whether they are likely to create obstacles
for the industry.
The HLEG is a diverse group, with more than 50 members representing different stakeholders, such
as think tanks, academia, EU Agencies, civil society, and industry, who were given the difficult task of
producing a simple checklist for a complex issue. The public engagement exercise appears successful
overall, in that more than 450 stakeholders have signed up and are contributing to the process.
The next sections of this report present the items listed by the HLEG followed by the analysis and
suggestions raised by the Task Force (see the list of the members of the Task Force in Annex 1).
Achieving mutual understanding in intercultural project partnerships: co-operation, self-orientation, and fragility
Communication depends on cooperation in at least the following way: in order to be successful, communicative behavior needs to be adjusted to the general world knowledge, abilities, and interests of the hearer, and the hearer's success in figuring out the message and responding to it needs to be informed by assumptions about the communicator's informative intentions, personal goals, and communicative abilities. In other words, interlocutors cooperate by coordinating their actions in order to fulfill their communicative intentions. This minimal assumption about cooperativeness must in one way or another be built into the foundations of any plausible inferential model of human communication. However, the communication process is also influenced to a greater or lesser extent, whether intentionally and consciously or unintentionally and unconsciously, by the participants' orientation toward, or preoccupation with, their own concerns, so their behavior may easily fall short of being as cooperative as is required for achieving successful communication.
Spoken Language Interaction with Robots: Recommendations for Future Research
With robotics rapidly advancing, more effective human-robot interaction is increasingly needed to realize the full potential of robots for society. While spoken language must be part of the solution, our ability to provide spoken language interaction capabilities is still very limited. In this article, based on the report of an interdisciplinary workshop convened by the National Science Foundation, we identify key scientific and engineering advances needed to enable effective spoken language interaction with robotics. We make 25 recommendations, involving eight general themes: putting human needs first, better modeling the social and interactive aspects of language, improving robustness, creating new methods for rapid adaptation, better integrating speech and language with other communication modalities, giving speech and language components access to rich representations of the robot's current knowledge and state, making all components operate in real time, and improving research infrastructure and resources. Research and development that prioritizes these topics will, we believe, provide a solid foundation for the creation of speech-capable robots that are easy and effective for humans to work with.
New Methods, Current Trends and Software Infrastructure for NLP
The increasing use of 'new methods' in NLP, which the NeMLaP conference
series exemplifies, occurs in the context of a wider shift in the nature and
concerns of the discipline. This paper begins with a short review of this
context and of significant trends in the field. The review motivates and leads
to a set of requirements for support software of general utility for NLP
research and development workers. A freely available system designed to meet
these requirements, called GATE (a General Architecture for Text Engineering),
is described. Information Extraction (IE), in the sense defined by the Message
Understanding Conferences (ARPA 1995), is an NLP application in which many of
the new methods have found a home (Hobbs 1993; Jacobs, ed., 1992). An IE
system based on GATE is also available for research purposes, and this is
described. Lastly, we review related work.
The Challenge of Spoken Language Systems: Research Directions for the Nineties
A spoken language system combines speech recognition, natural language processing and human interface technology. It functions by recognizing the person's words, interpreting the sequence of words to obtain a meaning in terms of the application, and providing an appropriate response back to the user. Potential applications of spoken language systems range from simple tasks, such as retrieving information from an existing database (traffic reports, airline schedules), to interactive problem solving tasks involving complex planning and reasoning (travel planning, traffic routing), to support for multilingual interactions. We examine eight key areas in which basic research is needed to produce spoken language systems: (1) robust speech recognition; (2) automatic training and adaptation; (3) spontaneous speech; (4) dialogue models; (5) natural language response generation; (6) speech synthesis and speech generation; (7) multilingual systems; and (8) interactive multimodal systems. In each area, we identify key research challenges, the infrastructure needed to support research, and the expected benefits. We conclude by reviewing the need for multidisciplinary research, for development of shared corpora and related resources, for computational support, and for rapid communication among researchers. The successful development of this technology will increase accessibility of computers to a wide range of users, will facilitate multinational communication and trade, and will create new research specialties and jobs in this rapidly expanding area.
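The three-stage architecture this abstract describes (recognize words, interpret them as an application meaning, respond to the user) can be sketched as a simple pipeline. The components below are hypothetical stand-ins of my own devising, not any system from the paper; a real system would use an actual speech recognizer and a trained language-understanding model.

```python
# Minimal sketch of the recognize -> interpret -> respond pipeline.
# All three components are toy stand-ins for illustration only.

def recognize(audio: str) -> list[str]:
    """Stand-in ASR: treat the 'audio' as already-transcribed text."""
    return audio.lower().split()

def interpret(words: list[str]) -> dict:
    """Stand-in NLU: map keywords to a toy application intent."""
    if "schedule" in words:
        return {"intent": "airline_schedule"}
    if "traffic" in words:
        return {"intent": "traffic_report"}
    return {"intent": "unknown"}

def respond(meaning: dict) -> str:
    """Stand-in response generation: canned reply per intent."""
    replies = {
        "airline_schedule": "Here are today's flight times.",
        "traffic_report": "Traffic is light on your route.",
        "unknown": "Sorry, could you rephrase that?",
    }
    return replies[meaning["intent"]]

def spoken_language_system(audio: str) -> str:
    """Chain the three stages, as the abstract's description implies."""
    return respond(interpret(recognize(audio)))

print(spoken_language_system("Show me the airline SCHEDULE"))
# prints "Here are today's flight times."
```

The point of the sketch is only the staged structure: each component consumes the previous stage's output, which is why robustness in early stages (research area 1 above) matters so much for the stages downstream.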