
    Modelling potential movement in constrained travel environments using rough space-time prisms

    The widespread adoption of location-aware technologies (LATs) has afforded analysts new opportunities for efficiently collecting trajectory data on moving individuals. These technologies measure a trajectory as a finite sample of time-stamped locations. The uncertainty arising from both finite sampling and measurement error often makes it difficult to reconstruct and represent the trajectory an individual followed in space-time. Time geography offers a useful framework for delineating the potential path of an individual between two sample locations. Although this potential path is easily delineated for travel along networks, it is far less straightforward in non-network, obstacle-constrained environments. Current models, however, have mostly concentrated on network environments and do not account for the spatiotemporal uncertainty of the input data. This article addresses both issues simultaneously by developing a novel methodology to capture potential movement between uncertain space-time points in obstacle-constrained travel environments.
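
    As a point of reference for the classical concept the article builds on, the sketch below tests whether a location falls inside the crisp, obstacle-free space-time prism (potential path area) spanned by two time-stamped fixes and a maximum travel speed. It is a minimal illustration only; the function name, the Euclidean-distance assumption and the example values are not taken from the article, which extends this idea to rough (uncertain) anchor points and obstacle-constrained environments.

    ```python
    # Minimal sketch of the classical, obstacle-free space-time prism:
    # a location z lies in the potential path area between two time-stamped
    # fixes if it can be reached from the first fix and still allow arrival
    # at the second fix without exceeding the maximum travel speed.
    # All names and the planar Euclidean-distance assumption are illustrative.
    import math

    def in_potential_path_area(p1, t1, p2, t2, z, v_max):
        """Return True if location z is reachable between fixes (p1, t1) and (p2, t2)."""
        d = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
        # The reachable set is an ellipse with foci p1 and p2: the detour via z
        # must fit within the time budget at maximum speed v_max.
        return d(p1, z) + d(z, p2) <= v_max * (t2 - t1)

    # Example: two GPS fixes 60 s apart, walking speed 1.5 m/s.
    print(in_potential_path_area((0.0, 0.0), 0.0, (60.0, 0.0), 60.0, (30.0, 20.0), 1.5))  # True
    ```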

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered: built for humans and based on human models. Such interfaces should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, from enabling computers to understand human behavior.

    SALSA: A Novel Dataset for Multimodal Group Behavior Analysis

    Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is gratifying due to the wealth of information available at the group (mining social networks) and individual (recognizing native behavioral and personality traits) levels. However, analyzing social scenes involving FCGs is also highly challenging, owing to the difficulty of extracting behavioral cues such as target locations, speaking activity and head/body pose under crowding and extreme occlusions. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural indoor environment for over 60 minutes, in poster-presentation and cocktail-party contexts that present difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberation and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay using four static surveillance cameras and sociometric badges worn by each participant, comprising a microphone, an accelerometer, and Bluetooth and infrared sensors. In addition to the raw data, we provide annotations of each individual's personality as well as their position, head and body orientation, and F-formation information over the entire event duration. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded multiple cues synergetically aid the automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.

    Constructing a gazebo: supporting teamwork in a tightly coupled, distributed task in virtual reality

    Many tasks require teamwork. Team members may work concurrently, but there must be some occasions of coming together. Collaborative virtual environments (CVEs) allow distributed teams to come together across distance to share a task. Studies of CVE systems have tended to focus on the sense of presence or copresence with other people. They have avoided studying close interaction between users, such as the shared manipulation of objects, because CVEs suffer from inherent network delays and often have cumbersome user interfaces. Little is known about the effectiveness of collaboration in tasks requiring various forms of object sharing and, in particular, the concurrent manipulation of objects. This paper investigates the effectiveness of supporting teamwork among a geographically distributed group in a task that requires the shared manipulation of objects. To complete the task, users must share objects through concurrent manipulation of both the same and distinct attributes. The effectiveness of teamwork is measured in terms of the time taken to achieve each step, as well as the impressions of users. The effect of the interface is examined by comparing various combinations of walk-in cubic immersive projection technology (IPT) displays and desktop devices.

    Robust pedestrian detection and tracking in crowded scenes

    In this paper, a robust computer vision approach to detecting and tracking pedestrians in unconstrained crowded scenes is presented. Pedestrian detection is performed via a 3D clustering process within a region-growing framework. The clustering process avoids hard thresholds by using biometrically inspired constraints and a number of plan-view statistics. Pedestrian tracking is achieved by formulating the track-matching process as a weighted bipartite graph and using a Weighted Maximum Cardinality Matching scheme. The approach is evaluated using both indoor and outdoor sequences, captured with a variety of camera placements and orientations, that feature significant challenges in terms of the number of pedestrians present, their interactions and the scene lighting conditions. The evaluation is performed against manually generated ground truth for all sequences. Results point to the extremely accurate performance of the proposed approach in all cases.
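
    The track-matching step described above is a weighted bipartite assignment between existing tracks and new detections. As a rough illustration of that formulation (not the authors' Weighted Maximum Cardinality Matching scheme), the sketch below uses SciPy's Hungarian solver with a distance-based cost and a gating threshold; the cost definition, the threshold and all names are assumptions.

    ```python
    # Illustrative frame-to-frame data association as weighted bipartite matching.
    # The paper uses a Weighted Maximum Cardinality Matching scheme; here a
    # Hungarian (minimum-cost assignment) solver stands in to show the idea.
    # Cost values, the gating threshold and all names are assumptions.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_tracks_to_detections(track_positions, detections, gate=2.0):
        """Assign detections to existing tracks; pairs farther apart than `gate` are rejected."""
        tracks = np.asarray(track_positions, dtype=float)   # (T, 2) predicted track positions
        dets = np.asarray(detections, dtype=float)          # (D, 2) detected positions
        # Pairwise Euclidean distances act as the edge weights of the bipartite graph.
        cost = np.linalg.norm(tracks[:, None, :] - dets[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)
        # Keep only assignments that pass the gating threshold.
        return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]

    # Example: two tracks, three detections (the third would start a new track).
    print(match_tracks_to_detections([(0, 0), (5, 5)], [(0.2, 0.1), (5.1, 4.9), (9, 9)]))  # [(0, 0), (1, 1)]
    ```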

    Analysis of Hand Segmentation in the Wild

    A large number of works in egocentric vision have concentrated on action and object recognition. Detection and segmentation of hands in first-person videos, however, have been less explored. For many applications in this domain, it is necessary to accurately segment not only the hands of the camera wearer but also the hands of others with whom he or she is interacting. Here, we take an in-depth look at the hand segmentation problem. In the quest for robust hand segmentation methods, we evaluated the performance of state-of-the-art semantic segmentation methods, off the shelf and fine-tuned, on existing datasets. We fine-tune RefineNet, a leading semantic segmentation method, for hand segmentation and find that it does much better than the best contenders. Existing hand segmentation datasets are collected in laboratory settings. To overcome this limitation, we contribute two new datasets: (a) EgoYouTubeHands, comprising egocentric videos containing hands in the wild, and (b) HandOverFace, for analyzing the performance of our models in the presence of similar-appearance occlusions. We further explore whether conditional random fields can help refine the generated hand segmentations. To demonstrate the benefit of accurate hand maps, we train a CNN for hand-based activity recognition and achieve higher accuracy when the CNN is trained using hand maps produced by the fine-tuned RefineNet. Finally, we annotate a subset of the EgoHands dataset for fine-grained action recognition and show that an accuracy of 58.6% can be achieved by looking at a single hand pose, which is much better than the chance level (12.5%).
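
    To make the fine-tuning setup concrete: the task reduces to two-class (hand vs. background) semantic segmentation. The sketch below shows a minimal training step for such a model; since RefineNet is not bundled with torchvision, a DeepLabV3 backbone stands in, and the loss, optimizer, hyperparameters and tensor shapes are illustrative assumptions rather than the paper's configuration.

    ```python
    # Illustrative fine-tuning step for binary hand-vs-background segmentation.
    # RefineNet (used in the paper) is not available in torchvision, so a
    # DeepLabV3 model stands in; all training settings here are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    # No pretrained weights are downloaded for this sketch (torchvision >= 0.13 keywords).
    model = deeplabv3_resnet50(weights=None, weights_backbone=None, num_classes=2)  # 0 = background, 1 = hand
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

    def train_step(images, masks):
        """One optimization step; images: (N, 3, H, W) floats, masks: (N, H, W) int64 labels."""
        model.train()
        optimizer.zero_grad()
        logits = model(images)["out"]          # (N, 2, H, W) per-pixel class scores
        loss = criterion(logits, masks)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Smoke test with random tensors standing in for egocentric frames and hand masks.
    imgs = torch.randn(2, 3, 224, 224)
    lbls = torch.randint(0, 2, (2, 224, 224), dtype=torch.long)
    print(train_step(imgs, lbls))
    ```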