Temporally coherent 3D point cloud video segmentation in generic scenes
© 2018 IEEE. Video segmentation is an important building block for high-level applications such as scene understanding and interaction analysis. While state-of-the-art learning- and model-based methods achieve outstanding results in this field, they are restricted to certain types of scenes or require a large amount of annotated training data to achieve object segmentation in generic scenes. On the other hand, RGBD data, widely available since the introduction of consumer depth sensors, provide real-world 3D geometry rather than just 2D images. The explicit geometry in RGBD data greatly helps in computer vision tasks, but the lack of annotations for this type of data may hinder the extension of learning-based methods to RGBD. In this paper, we present a novel generic segmentation approach for 3D point cloud video (stream data) that thoroughly exploits the explicit geometry in RGBD. Our proposal is based only on low-level features, such as connectivity and compactness. We exploit temporal coherence by representing the rough estimation of objects in a single frame with a hierarchical structure and propagating this hierarchy over time. The hierarchical structure provides an efficient way to establish temporal correspondences at different scales of object connectivity and to manage the splits and merges of objects over time. This allows the segmentation to be updated according to the evidence observed in the history. The proposed method is evaluated on several challenging datasets, with promising results. Peer-reviewed postprint (author's final draft).
Fast and Robust Detection of Fallen People from a Mobile Robot
This paper deals with the problem of detecting fallen people lying on the
floor by means of a mobile robot equipped with a 3D depth sensor. In the
proposed algorithm, inspired by semantic segmentation techniques, the 3D scene
is over-segmented into small patches. Fallen people are then detected by means
of two SVM classifiers: the first one labels each patch, while the second one
captures the spatial relations between them. This novel approach proved to be robust and fast: thanks to the use of small patches, fallen people are correctly detected even in real cluttered scenes with objects side by side. Moreover, the algorithm can be executed on a mobile robot fitted with a standard laptop, making it possible to exploit the 2D environmental map built by the robot and the multiple points of view obtained during navigation. Additionally, the algorithm is robust to illumination changes, since it relies on depth data rather than RGB data. All the methods have been thoroughly validated on the IASLAB-RGBD Fallen Person Dataset, which is published online as a further contribution. It consists of several static and dynamic sequences with 15 different people and 2 different environments.
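The two-stage SVM pipeline this abstract describes could be sketched roughly as follows. This is a minimal illustration using scikit-learn; the per-patch features, group construction, and spatial-relation features below are synthetic placeholders, not the paper's actual geometric descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stage 1: label each over-segmented patch. The 6-dimensional features
# are synthetic stand-ins for per-patch geometric descriptors.
patch_features = rng.normal(size=(200, 6))
patch_labels = (patch_features[:, 0] > 0).astype(int)  # 1 = "person" patch

stage1 = SVC(kernel="rbf", probability=True).fit(patch_features, patch_labels)

# Stage 2: classify groups of neighbouring patches from the spatial
# relations between their stage-1 scores (placeholder statistics here).
def relation_features(group):
    scores = stage1.predict_proba(group)[:, 1]
    return [scores.mean(), scores.max(), scores.min()]

groups = [rng.normal(size=(5, 6)) for _ in range(100)]
rel_X = np.array([relation_features(g) for g in groups])
# Illustrative labels: median split guarantees both classes are present.
rel_y = (rel_X[:, 0] > np.median(rel_X[:, 0])).astype(int)

stage2 = SVC(kernel="linear").fit(rel_X, rel_y)
predictions = stage2.predict(rel_X)  # 1 = fallen person detected in group
```

The design point is that the second classifier never sees raw patches, only the first stage's outputs arranged spatially, which is what lets nearby patches reinforce or suppress each other.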
An Approach for Segmentation of Colored Images with Seeded Spatial Enhancement
In image analysis, image segmentation is the operation that divides an image into a set of distinct segments. This work examines common colour image segmentation techniques and methods. Image enhancement is performed using a four-connected approach for seed selection. An algorithm based on manual seed selection is implemented: it selects a seed point in the image, checks the four neighbouring pixels connected to that seed point, and segments the image into foreground and background. Finally, an evaluation criterion is introduced and applied to the algorithm's results. Five widely used image segmentation algorithms, namely efficient graph-based, K-means, mean shift, expectation maximization, and a hybrid method, are compared with the implemented algorithm.
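The four-connected seeded growing step described above can be sketched as follows. This is a simplified illustration: the abstract does not specify the similarity criterion, so an intensity tolerance around the seed value is assumed here:

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=20):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    is within `tol` of the seed value (assumed criterion, for illustration)."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        # Visit the four connected neighbours of the current pixel.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(int(image[ny, nx]) - seed_val) <= tol:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask  # True = foreground, False = background

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                 # bright 4x4 square as "foreground"
fg = region_grow(img, seed=(3, 3))
print(fg.sum())                     # prints 16: the whole bright square
```

The boolean mask directly gives the foreground/background framing the abstract mentions: pixels reached from the seed are foreground, everything else is background.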
A study on detection of risk factors of a toddler’s fall injuries using visual dynamic motion cues
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The research in this thesis is intended to aid caregivers' supervision of toddlers and prevent accidental injuries, especially injuries due to falls in the home environment. Despite the fact that young children are particularly vulnerable to home accidents and a caregiver cannot provide continuous supervision, there have been very few attempts to develop an automatic system to tackle their accidents. Vision-based analysis methods have been developed to recognise toddlers' fall risk factors related to changes in their behaviour or environment. First, suggestions for preventing fall events of young children at home were collected from well-known child-safety organisations. A large number of fall records of toddlers who had sought treatment at a hospital were analysed to identify a toddler's fall risk factors. The factors include clutter presenting a tripping or slipping hazard on the floor, and a toddler moving around or climbing on furniture or room structures.
The major technical problem in detecting the risk factors is classifying foreground objects into human and non-human, and novel approaches have been proposed for this classification. Unlike most existing studies, which focus on human appearance cues such as skin colour for human detection, the approaches addressed in this thesis use cues related to dynamic motion. The first cue is based on the fact that there is relative motion between human body parts, whereas typical indoor clutter has no such parts with diverse motions. In addition, further motion cues are employed to differentiate a human from a pet, since a pet also moves its parts diversely: the angle changes of an ellipse fitted to each object, and the history of its actual height, which together capture the varied posture changes of humans and the different body sizes of pets. The methods work well as long as foreground regions are correctly segmented.
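The ellipse-angle cue can be illustrated with a simple moment-based orientation estimate. This sketch uses PCA on foreground pixel coordinates rather than the thesis's actual ellipse-fitting procedure, and the blob shapes are illustrative:

```python
import numpy as np

def orientation_deg(mask):
    """Orientation of the blob's major axis in degrees (0-180), estimated
    from the principal axis of the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs, ys], axis=1).astype(float)
    coords -= coords.mean(axis=0)
    cov = np.cov(coords.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # direction of largest spread
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# An upright "person" blob vs. the same blob lying on its side.
upright = np.zeros((40, 40), dtype=bool)
upright[5:35, 18:22] = True   # tall, thin foreground region
fallen = upright.T            # the same region rotated 90 degrees

a_up = orientation_deg(upright)   # major axis roughly vertical (~90 deg)
a_fall = orientation_deg(fallen)  # major axis roughly horizontal (~0 deg)
```

Tracking this angle over time gives the posture-change signal the thesis uses: a large, rapid change in the fitted ellipse's orientation is what distinguishes diverse human posture changes from a pet's motion.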
A robust people detection, tracking, and counting system
The ability to track moving people is a key aspect of autonomous robot systems in real-world environments. Whilst knowing the approximate positions of people may be sufficient for many tasks, the ability to identify unique people is needed to count people accurately in the real world. Accomplishing the people-counting task requires a robust system for people detection, tracking, and identification. This paper presents our approach to robust real-world people detection, tracking, and counting using a PrimeSense RGBD camera. The past research upon which we build is highlighted, and novel methods are presented to solve the problems of sensor self-localisation, false negatives due to people physically interacting with the environment, and track misassociation due to crowdedness. An empirical evaluation of our approach in a major Sydney public train station (N=420) was conducted, and results demonstrating our methods in the complexities of this challenging environment are presented.