Detecting Action Items in Meetings
Abstract. We present a method for detecting action items in spontaneous meeting speech. Using a supervised approach incorporating prosodic, lexical and structural features, we can classify such items with a high degree of accuracy. We also examine how well various feature subclasses can perform this task on their own.
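The supervised, feature-based formulation can be illustrated with a minimal sketch: turn an utterance into a small feature dictionary and score it. The cue list, weights, and scoring rule below are hypothetical stand-ins, not the paper's classifier or feature set (which also relies on prosodic features unavailable from text alone).

```python
# Hypothetical sketch of lexical/structural features for action-item
# detection; NOT the paper's actual features or model.

ACTION_CUES = {"will", "should", "need", "send", "schedule", "follow", "assign"}

def extract_features(utterance: str) -> dict:
    """Turn one dialogue utterance into a small feature dictionary."""
    tokens = utterance.lower().split()
    return {
        "n_tokens": len(tokens),                          # structural: length
        "n_cues": sum(t in ACTION_CUES for t in tokens),  # lexical cue hits
        "ends_question": utterance.rstrip().endswith("?"),
    }

def score(features: dict) -> float:
    """Toy linear score standing in for a trained classifier."""
    return 1.5 * features["n_cues"] - 0.5 * features["ends_question"]

utt = "John will send the revised budget and schedule a follow-up"
feats = extract_features(utt)
print(feats["n_cues"], score(feats) > 0)
```

In a real system the hand-picked cue list would be replaced by features learned from labelled meeting transcripts, combined with prosodic and structural signals as the abstract describes.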
Use-cases on evolution
This report presents a set of use cases for evolution and reactivity for data in the Web and
Semantic Web. The set is organized around three case study scenarios, each of which relates
to one of the three areas of application within Rewerse. Namely, the scenarios are:
“The Rewerse Information System and Portal”, closely related to the work of A3 –
Personalised Information Systems; “Organizing Travels”, which may be related to the work
of A1 – Events, Time, and Locations; and “Updates and evolution in bioinformatics data
sources”, related to the work of A2 – Towards a Bioinformatics Web.
Emergent Leadership Detection Across Datasets
Automatic detection of emergent leaders in small groups from nonverbal
behaviour is a growing research topic in social signal processing but existing
methods were evaluated on single datasets -- an unrealistic assumption for
real-world applications in which systems are required to also work in settings
unseen at training time. It therefore remains unclear whether current methods
for emergent leadership detection generalise to similar but new settings and to
which extent. To overcome this limitation, we are the first to study a
cross-dataset evaluation setting for the emergent leadership detection task. We
provide evaluations for within- and cross-dataset prediction using two current
datasets (PAVIS and MPIIGroupInteraction), as well as an investigation on the
robustness of commonly used feature channels (visual focus of attention, body
pose, facial action units, speaking activity) and online prediction in the
cross-dataset setting. Our evaluations show that using pose and eye contact
based features, cross-dataset prediction is possible with an accuracy of 0.68,
as such providing another important piece of the puzzle towards emergent
leadership detection in the real world.
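The within- vs cross-dataset evaluation protocol described above can be sketched as follows; the 1-nearest-neighbour classifier and the toy feature vectors are hypothetical stand-ins for the paper's actual feature channels and models, used only to show the train-on-one-dataset, test-on-the-other structure.

```python
# Illustrative sketch of within- vs cross-dataset evaluation.
# The 1-NN classifier and toy data are hypothetical, not the paper's method.
import math

def one_nn_predict(train, test_x):
    """Classify each test point by the label of its nearest training point."""
    preds = []
    for x in test_x:
        nearest = min(train, key=lambda pair: math.dist(pair[0], x))
        preds.append(nearest[1])
    return preds

def accuracy(train, test):
    xs, ys = zip(*test)
    preds = one_nn_predict(train, list(xs))
    return sum(p == y for p, y in zip(preds, ys)) / len(ys)

# Toy stand-ins for two datasets of (feature_vector, leader_label) pairs.
pavis = [((0.9, 0.8), 1), ((0.1, 0.2), 0), ((0.8, 0.7), 1), ((0.2, 0.1), 0)]
mpii  = [((0.7, 0.9), 1), ((0.3, 0.2), 0)]

print("within:", accuracy(pavis, pavis))  # train and test on the same dataset
print("cross:", accuracy(pavis, mpii))    # train on one dataset, test on the other
```

The gap between the "within" and "cross" numbers is exactly what a cross-dataset study of this kind measures: how much performance degrades when the test setting was never seen at training time.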
Conducting Effective Meetings
{Excerpt} Meetings are essential in any form of human enterprise. These days, they are so common that turning the resources they tie up into sustained results is a priority in high-performance organizations. This is because they are potential time wasters: the other people present may not respect their own time as much as you have come to respect yours, and it is therefore unlikely that they will mind wasting your time. Generic actions before, during, and after can make meetings more effective.
Overview of VideoCLEF 2009: New perspectives on speech-based multimedia content enrichment
VideoCLEF 2009 offered three tasks related to enriching video content for improved multimedia access in a multilingual environment. For each task, video data (Dutch-language television, predominantly documentaries) accompanied by speech recognition transcripts were provided.
The Subject Classification Task involved automatic tagging of videos with subject theme labels. The best performance was achieved by approaching subject tagging as an information retrieval task and using both speech recognition transcripts and archival metadata. Alternatively, classifiers were trained using either the training data provided or data collected from Wikipedia or via general Web search. The Affect Task involved detecting narrative peaks, defined as points where viewers perceive heightened dramatic tension. The task was carried out on the “Beeldenstorm” collection containing 45 short-form documentaries on the visual arts. The best runs exploited affective vocabulary and audience-directed speech. Other approaches included using topic changes, elevated speaking pitch, increased speaking intensity and radical visual changes. The Linking Task, also called “Finding Related Resources Across Languages,” involved linking video to material on the same subject in a different language.
Participants were provided with a list of multimedia anchors (short video segments) in the Dutch-language “Beeldenstorm” collection and were expected to return target pages drawn from English-language Wikipedia. The best performing methods used the transcript of the speech spoken during the multimedia anchor to build a query to search an index of the Dutch-language Wikipedia. The Dutch Wikipedia pages returned were used to identify related English pages. Participants also experimented with pseudo-relevance feedback, query translation and methods that targeted proper names.
Learning About Meetings
Most people participate in meetings almost every day, multiple times a day.
The study of meetings is important, but also challenging, as it requires an
understanding of social signals and complex interpersonal dynamics. Our aim
in this work is to use a data-driven approach to the science of meetings. We
provide tentative evidence that: i) it is possible to automatically detect when
during the meeting a key decision is taking place, from analyzing only the
local dialogue acts, ii) there are common patterns in the way social dialogue
acts are interspersed throughout a meeting, iii) at the time key decisions are
made, the amount of time left in the meeting can be predicted from the amount
of time that has passed, iv) it is often possible to predict whether a proposal
during a meeting will be accepted or rejected based entirely on the language
(the set of persuasive words) used by the speaker.
A collaborative-project memory tool for participatory planning
Technology increasingly provides planners and designers with tools and methods to collect and communicate spatial data and to assist spatial analysis. When we think about new technologies supporting planning, we mainly think of GIS, urban modelling, simulation models and virtual reality. But many other challenges in planning practice call for tools to support and improve planning activities. In this paper we discuss the need for new tools to support knowledge representation and knowledge sharing in participatory planning processes. The paper describes the use of a hypermedia and sensemaking tool (Compendium) to structure the knowledge produced in a real participatory planning process. In the present application, Compendium was used not for real-time capture but for post-hoc analysis of a real participatory planning experience.
Compendium has been used to represent and reconstruct the group memory of consultation meetings, allowing both the planning team and the citizens to navigate the contents of those meetings. Moreover, the paper describes the main features and potential of Compendium in the participatory planning domain, along with the results of the group memory reconstruction. Finally, the case study prompts reflections on the need for new planning technologies supporting participatory knowledge generation, representation and management.
Augmenting human memory using personal lifelogs
Memory is a key human faculty supporting life activities, including social interactions, life management and problem solving. Unfortunately, our memory is not perfect. Normal individuals will have occasional memory problems which can be frustrating, while those with memory impairments can often experience a greatly reduced quality of life. Augmenting memory has the potential to make normal individuals more effective, and to give those with significant memory problems a higher general quality of life. Current technologies are now making it possible to automatically capture and store daily life experiences over an extended period, potentially even over a lifetime. This type of data collection, often referred to as a personal life log (PLL), can include data such as continuously captured pictures or videos from a first person perspective, scanned copies of archival material such as books, electronic documents read or created, and emails and SMS messages sent and received, along with context data on time of capture and access, and location via GPS sensors.
PLLs offer the potential for memory augmentation. Existing work on PLLs has focused on the technologies of data capture and retrieval, but little work has been done to explore how these captured data and retrieval techniques can be applied by ordinary people to support their memory. In this paper, we explore normal people's needs for memory augmentation, based on the psychology literature on the mechanisms underlying memory problems, and discuss the possible functions that PLLs can provide to support these memory augmentation needs. Based on this, we also suggest guidelines for data capture, retrieval needs and computer-based interface design. Finally, we introduce our work-in-progress prototype PLL search system in the iCLIPS project as an example of augmenting human memory with PLLs and computer-based interfaces.