Meetings and Meeting Modeling in Smart Environments
In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and for the real-time or off-line generation of meetings in virtual reality. The research reported here forms part of the European 5th and 6th framework programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment that can collect multimodal captures of the activities and discussions in a meeting room, with the aim of using this information as input to tools for real-time support, browsing, retrieval and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow the generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that allow those who cannot be physically present to take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants and (semi-)autonomous virtual participants disappear.
Towards Simulating Humans in Augmented Multi-party Interaction
Human-computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user also displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment itself, smart objects (e.g., mobile robots, smart furniture) and the human participants in it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in the European AMI research project.
Mixed reality participants in smart meeting rooms and smart home environments
Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user also displays characteristics that show how he or she, not necessarily consciously, provides the environment with useful verbal and nonverbal input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment itself, smart objects (e.g., mobile robots, smart furniture) and the human participants in it. It is therefore useful for the profile to contain a physical representation of the user obtained by multimodal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, discuss how remote meeting participants can take part in meeting activities, and offer some observations on translating research results to smart home environments.
A best view selection in meetings through attention analysis using a multi-camera network
Human activity analysis is an essential task in ambient intelligence and computer vision. The main focus lies in the automatic analysis of ongoing activities from a multi-camera network. One possible application is meeting analysis, which explores the dynamics in meetings by using low-level data to infer high-level activities. However, the detection of such activities is still very challenging because the low-level data are often corrupted or imprecise. In this paper we present an approach to understanding the dynamics in meetings using a multi-camera network consisting of fixed ambient and portable close-up cameras. As a particular application we aim to find the most informative video stream, for example as a representative view for a remote participant. Our contribution is threefold: first, we estimate the extrinsic parameters of the portable close-up cameras based on head positions; second, we find common overlapping areas based on the consensus of people's orientation; and third, we estimate the most informative view for a remote participant using these common overlapping areas. We evaluated the proposed approach and compared it to a motion estimation method. Experimental results show that we reach an accuracy of 74% compared to manually selected views.
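The view-selection step described in this abstract can be sketched as a simple orientation-consensus heuristic. This is a hypothetical simplification, not the authors' pipeline: each camera is reduced to a single optical-axis angle, the group's attention is the circular mean of head-orientation angles, and the best view is the camera whose axis most directly opposes that consensus (so faces are seen frontally).

```python
import math

def best_view(cameras, head_orientations):
    """Pick the camera that best faces the group's consensus attention.

    cameras: dict of camera name -> optical-axis angle (radians)
    head_orientations: list of per-person head angles (radians)
    Both representations are illustrative assumptions.
    """
    # Circular mean via vector averaging (robust to angle wrap-around).
    sx = sum(math.cos(a) for a in head_orientations)
    sy = sum(math.sin(a) for a in head_orientations)
    consensus = math.atan2(sy, sx)

    # A camera looking opposite the consensus sees the group head-on.
    target = consensus + math.pi

    def angular_diff(a, b):
        d = (a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    return min(cameras, key=lambda name: angular_diff(cameras[name], target))
```

In practice the paper works with full extrinsic camera parameters and overlapping fields of view rather than a single axis angle, but the core idea, ranking cameras by how well they face the group's shared orientation, is the same.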
Fundamental structures of dynamic social networks
Social systems are in a constant state of flux, with dynamics spanning from minute-by-minute changes to patterns present on the timescale of years. Accurate models of social dynamics are important for understanding the spreading of influence or diseases, the formation of friendships, and the productivity of teams. While there has been much progress on understanding complex networks over the past decade, little is known about the regularities governing the micro-dynamics of social networks. Here we explore the dynamic social network of a densely connected population of approximately 1000 individuals and their interactions in the network of real-world person-to-person proximity measured via Bluetooth, as well as their telecommunication networks, online social media contacts, geo-location, and demographic data. These high-resolution data allow us to observe social groups directly, rendering community detection unnecessary. Starting from 5-minute time slices, we uncover dynamic social structures expressed on multiple timescales. On the hourly timescale, we find that gatherings are fluid, with members coming and going, but organized via a stable core of individuals. Each core represents a social context. Cores exhibit a pattern of recurring meetings across weeks and months, each with varying degrees of regularity. Taken together, these findings provide a powerful simplification of the social network, where cores represent fundamental structures expressed with strong temporal and spatial regularity. Using this framework, we explore the complex interplay between social and geospatial behavior, documenting how the formation of cores is preceded by coordination behavior in the communication networks, and demonstrating that social behavior can be predicted with high precision.
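The 5-minute time slices mentioned in the abstract can be illustrated with a short sketch: bin pairwise Bluetooth proximity events into slices and read off each slice's directly observed social groups as connected components of the proximity graph. The event format `(timestamp, person_a, person_b)` and the components-based grouping are assumptions for illustration, not the authors' exact pipeline.

```python
from collections import defaultdict

SLICE = 5 * 60  # 5-minute time slices, in seconds

def group_slices(proximity_events):
    """Map each time slice to the social groups observed in it.

    proximity_events: iterable of (timestamp_seconds, person_a, person_b)
    tuples, each recording one pairwise proximity observation
    (an assumed format for this sketch).
    """
    # Collect proximity edges per slice.
    slices = defaultdict(list)
    for t, a, b in proximity_events:
        slices[t // SLICE].append((a, b))

    # Groups = connected components of the slice's proximity graph,
    # found with a small union-find structure.
    def components(edges):
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        for a, b in edges:
            parent[find(a)] = find(b)
        groups = defaultdict(set)
        for x in parent:
            groups[find(x)].add(x)
        return [frozenset(g) for g in groups.values()]

    return {s: components(e) for s, e in slices.items()}
```

Because the sensor data give group membership directly, this kind of slicing is all that is needed to observe gatherings; no community-detection algorithm has to be run on an aggregated network.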
A community-engaged infection prevention and control approach to Ebola.
The real missing link in Ebola control efforts to date may lie in the failure to apply core principles of health promotion: the early, active and sustained engagement of affected communities, their trusted leaders, networks and lay knowledge, to help inform what local control teams do, and how they may better do it, in partnership with communities. The predominant focus on viral transmission has inadvertently stigmatized and created fear-driven responses among affected individuals, families and communities. While rigorous adherence to standard infection prevention and control (IPC) precautions and safety standards for Ebola is critical, we may be more successful if we validate and combine local community knowledge and experiences with those of IPC medical teams. In an environment of trust, community partners can help us learn of modest adjustments that would not compromise safety but could improve community understanding of, and responses to, disease control protocol, so that it better reflects their 'community protocol' (local customs, beliefs, knowledge and practices) and concerns. Drawing on the experience of local experts in several African nations and of community-engaged health promotion leaders in the USA, Canada and WHO, we present an eight-step model, from entering communities with cultural humility, through reciprocal learning and trust, multi-method communication and development of the joint protocol, to assessing progress and outcomes and building for sustainability. Using examples of changes that are culturally relevant yet maintain safety, we illustrate how minor adjustments can often help prevent and treat the most serious emerging infectious disease since HIV/AIDS.
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.