
    Discovering activity patterns in office environment using a network of low-resolution visual sensors

    Understanding activity patterns in office environments is important for increasing workers’ comfort and productivity. This paper proposes an automated system for discovering the activity patterns of multiple persons in a work environment using a network of cheap low-resolution visual sensors (900 pixels). Firstly, the users’ locations are obtained from a robust people tracker based on recursive maximum likelihood principles. Secondly, based on the users’ mobility tracks, high-density positions are found using a bivariate kernel density estimation, and hotspots are then detected using a confidence region estimation. Thirdly, we analyze the individuals’ tracks to find the starting and ending hotspots. The starting and ending hotspots form an observation sequence, in which the user’s presence and absence are detected using three powerful Probabilistic Graphical Models (PGMs). We describe two approaches to identifying the user’s status: a single-model approach and a two-model mining approach. We evaluate both approaches on video sequences captured in a real work environment, where the persons’ daily routines were recorded over 5 months, and show that the second approach achieves better performance than the first. Routines dominating the entire group’s activities are identified with a methodology based on the Latent Dirichlet Allocation topic model. We also detect routines that are characteristic of individual persons. More specifically, we perform various analyses to determine regions with high variation, which may correspond to specific events.
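    The kernel-density step in the abstract above can be sketched as follows. This is a minimal illustration only: the fixed bandwidth, the toy mobility track and the “desk” cluster are assumptions for the sketch, not the paper’s actual tracker output or bandwidth-selection method.

```python
import math

def kde_density(points, query, bandwidth=1.0):
    """Bivariate Gaussian kernel density estimate at a 2D query point.

    `points` is a list of (x, y) positions (here, a toy mobility track);
    the bandwidth value is an illustrative assumption.
    """
    norm = 1.0 / (2 * math.pi * bandwidth ** 2 * len(points))
    total = 0.0
    for (x, y) in points:
        d2 = (query[0] - x) ** 2 + (query[1] - y) ** 2
        total += math.exp(-d2 / (2 * bandwidth ** 2))
    return norm * total

# Toy mobility track: positions cluster around a hypothetical desk at (2, 2),
# with one outlying position. A hotspot detector would keep high-density cells.
track = [(2.0, 2.1), (1.9, 2.0), (2.1, 1.9), (8.0, 8.0)]
assert kde_density(track, (2.0, 2.0)) > kde_density(track, (5.0, 5.0))
```

    In the paper’s pipeline, cells whose estimated density exceeds a confidence-region threshold would then be grouped into hotspots.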

    A framework for realistic 3D tele-immersion

    Meeting, socializing and conversing online with a group of people using teleconferencing systems is still quite different from the experience of meeting face to face. We are abruptly aware that we are online and that the people we are engaging with are not in close proximity, analogous to how talking on the telephone does not replicate the experience of talking in person. Several causes for these differences have been identified, and we propose inspiring and innovative solutions to these hurdles in an attempt to provide a more realistic, believable and engaging online conversational experience. We present the distributed and scalable framework REVERIE, which provides a balanced mix of these solutions. Applications built on top of the REVERIE framework will be able to provide interactive, immersive, photo-realistic experiences to a multitude of users, experiences that will feel much more similar to face-to-face meetings than those offered by conventional teleconferencing systems.

    Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition

    This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system’s robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, thus complying with cognitive development. Instead, the method uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method using a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting; however, in a speaker-independent setting the proposed method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.
    Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems
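    The self-supervision idea above — letting the auditory modality label training data for the visual modality — can be sketched with synthetic scalars. The energy threshold, the lip-motion feature and the trivial classifier are all illustrative assumptions; the paper’s actual features and learner are not reproduced here.

```python
def audio_pseudo_labels(frame_energies, threshold=0.5):
    """Derive per-frame speech labels from audio energy alone.

    Assumption for this sketch: energy above a fixed threshold means
    the person is speaking (1), otherwise silent (0).
    """
    return [1 if e > threshold else 0 for e in frame_energies]

def fit_visual_threshold(visual_feature, labels):
    """Fit a trivial visual classifier (a cut on one scalar feature,
    e.g. lip-motion magnitude) using the audio-derived pseudo-labels."""
    pos = [v for v, l in zip(visual_feature, labels) if l == 1]
    neg = [v for v, l in zip(visual_feature, labels) if l == 0]
    return (min(pos) + max(neg)) / 2

energies = [0.9, 0.8, 0.1, 0.05]  # synthetic audio energies per frame
lips     = [0.7, 0.6, 0.2, 0.1]   # synthetic lip-motion magnitudes
labels = audio_pseudo_labels(energies)
cut = fit_visual_threshold(lips, labels)
# The learned visual cut now reproduces the audio-derived labels,
# so speaking can be detected from vision alone in noisy audio.
assert all((v > cut) == bool(l) for v, l in zip(lips, labels))
```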

    Human Body Posture Recognition Approaches: A Review

    Human body posture recognition has become the focus of many researchers in recent years. Recognition of body posture is used in various applications, including surveillance, security, and health monitoring. However, systems that determine the body’s posture from video clips, images, or sensor data face many challenges when used in the real world. This paper provides an important review of how the most essential hardware technologies are used in posture recognition systems. These systems capture and collect datasets through accelerometer sensors or computer vision. In addition, this paper presents a comparison study with the state of the art in terms of accuracy. We also present the advantages and limitations of each system and suggest promising future ideas that can increase the efficiency of existing posture recognition systems. Finally, the most common datasets applied in these systems are described in detail. This review aims to be a resource for choosing among methods of recognizing human body posture and the techniques that suit each method. It analyzes more than 80 papers between 2015 and 202
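    For the accelerometer-based family of systems surveyed above, a minimal posture classifier can be sketched from a single gravity reading. The trunk-worn placement, the vertical y-axis convention and the 45° cut-off are hypothetical simplifications, not a method from any specific reviewed paper.

```python
import math

def posture_from_accel(ax, ay, az):
    """Classify a static posture from one accelerometer sample (in g).

    Assumption for this sketch: the device is worn on the trunk with
    its y-axis vertical when the wearer stands upright, so the tilt of
    the gravity vector away from y separates upright from lying.
    """
    tilt = math.degrees(math.atan2(math.hypot(ax, az), ay))
    return "upright" if tilt < 45.0 else "lying"

assert posture_from_accel(0.0, 1.0, 0.0) == "upright"  # gravity along y
assert posture_from_accel(1.0, 0.0, 0.0) == "lying"    # gravity along x
```

    Real systems in the review fuse many such samples over time (and often vision) to handle dynamic postures and sensor drift.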

    Understanding collaboration in Global Software Engineering (GSE) teams with the use of sensors: introducing a multi-sensor setting for observing social and human aspects in project management

    This paper discusses on-going research into the ways Global Software Engineering (GSE) teams collaborate on a range of software development tasks. The paper focuses on providing the means for observing and understanding GSE team member collaboration, including team coordination and member communication. Initially, the paper provides the background on social and human issues relating to GSE collaboration. Next, it describes a pilot study involving a simulation of virtual GSE teams working together using asynchronous and synchronous communication over a virtual learning environment. The study considered the use of multiple data collection techniques: recordings of SCRUM meetings, and design and implementation tasks. Finally, the paper discusses the use of a multi-sensor setting for observing human and social aspects of project management in GSE teams. The scope of the study is to provide project managers with the means for gathering data on GSE team coordination, including member emotions, participation patterns in team discussions and, potentially, stress levels.

    Intuitive human-device interaction for video control and feedback


    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands, without any other peripheral equipment. As such, it has commanded intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers investigating better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection and human pose estimation.
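    A common building block in the skeleton-based motion recognition techniques surveyed above is a joint-angle feature computed from the 3D joint positions the Kinect reports. The sketch below shows that computation on toy coordinates; the joint names and positions are illustrative, not Kinect SDK output.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint `b` formed by the 3D points a-b-c,
    e.g. the elbow angle from shoulder, elbow and wrist positions."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Toy skeleton frame: shoulder above the elbow, wrist in front of it,
# giving a right-angled elbow (coordinates are illustrative).
shoulder, elbow, wrist = (0, 1, 0), (0, 0, 0), (1, 0, 0)
assert abs(joint_angle(shoulder, elbow, wrist) - 90.0) < 1e-6
```

    Sequences of such angles over time form the feature vectors that the classifiers in the surveyed literature operate on.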

    Investigating the role of biometrics in education – the use of sensor data in collaborative learning

    This paper provides a detailed description of how a smart spaces laboratory has been used for assessing learners’ performance in various educational contexts. The paper shares the authors’ experiences from using sensor-generated data in a number of learning scenarios. In particular, the paper describes how a smart learning environment is created with the use of a range of sensors measuring key data from individual learners, including (i) heartbeat, (ii) emotion detection, (iii) sweat levels, (iv) voice fluctuations and (v) duration and pattern of contribution via voice recognition. The paper also explains how biometrics are used to assess learners’ contributions to certain activities and to evaluate collaborative learning in student groups. Finally, the paper instigates research into the role of visualizing biometrics as a medium for supporting assessment, facilitating learning processes and enhancing learning experiences. Examples of how learning analytics are created from biometrics are also provided, resulting from a number of pilot studies that have taken place over the past couple of years.
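    One simple analytic derivable from the voice-recognition data in item (v) above is each learner’s share of total talk time. The segment format and field names below are hypothetical, not the authors’ actual data schema.

```python
from collections import Counter

def participation_shares(speaker_segments):
    """Fraction of total talk time per learner.

    `speaker_segments` is a list of (speaker_id, duration_seconds)
    pairs, as might come from voice-recognition diarization; this
    schema is an assumption for the sketch.
    """
    totals = Counter()
    for speaker, duration in speaker_segments:
        totals[speaker] += duration
    grand = sum(totals.values())
    return {s: t / grand for s, t in totals.items()}

segments = [("learner_a", 30), ("learner_b", 10), ("learner_a", 20)]
shares = participation_shares(segments)
assert abs(shares["learner_a"] - 50 / 60) < 1e-9
assert abs(shares["learner_b"] - 10 / 60) < 1e-9
```

    Metrics like this, combined with the physiological signals listed above, could feed the biometric visualizations the paper proposes for assessment.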