32 research outputs found

    Comparison of machine learning algorithms for detecting coral reef

    (Received: 2014/07/31 - Accepted: 2014/09/23)
    This work focuses on developing a fast coral reef detector for use on an autonomous underwater vehicle (AUV). Fast detection lets the AUV stabilize with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using a bank of Gabor wavelet filters, and feature classification using machine learning based on Neural Networks. Because the Neural Network classification is time-consuming, we replace it with a classification algorithm based on Decision Trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 for testing). We implement the bank of Gabor wavelet filters in C++ using the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which led to the selection of Decision Trees. Our coral detector runs in 70 ms, compared to 22 s for the algorithm of Purser et al. (2009).
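    The abstract's pipeline (a Gabor wavelet filter bank feeding a classifier) can be sketched as follows. The paper's implementation is C++/OpenCV; this is a NumPy sketch, and the kernel size, sigma, wavelength, and number of orientations are illustrative assumptions, not the paper's values.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real part of a Gabor kernel (same parameterization as OpenCV's
    getGaborKernel): a Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lambd + psi)

def gabor_features(patch, n_orientations=4, size=9, sigma=2.0, lambd=4.0):
    """Mean absolute filter response per orientation: one feature per kernel.
    A vector like this would be fed to the Decision Tree classifier."""
    windows = sliding_window_view(patch, (size, size))       # valid-mode windows
    feats = []
    for k in range(n_orientations):
        kern = gabor_kernel(size, sigma, k * np.pi / n_orientations, lambd)
        responses = np.einsum('ijkl,kl->ij', windows, kern)  # cross-correlation
        feats.append(np.abs(responses).mean())
    return np.array(feats)
```

    Swapping the Neural Network for Decision Trees changes only the classifier that consumes these features; the filter bank, which dominates the running time, stays the same, which is why implementing it efficiently in C++/OpenCV matters.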

    Autonomous Shopping Cart Platform for People with Mobility Impairments

    Providing a platform able to interact with a specific user is a challenging problem for assistive technologies. Among the many platforms addressing this task, we tackle the problem of designing an autonomous shopping cart. We assume that the shopping cart is built on a unicycle-like robot endowed with two sensors: an RGB-D camera and a planar laser range finder. To combine the information from these two sensors, a data fusion algorithm has been developed using a particle filter, augmented with a k-clustering step to extract person position estimates. The problem of stabilizing the robot's position at a fixed distance from the user has been solved through classical control design. Results on a real mobile platform verify the effectiveness of the proposed approach.
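    The fusion scheme described above (particle filter over person candidates from both sensors, followed by a clustering step to read out person estimates) can be sketched as below. The likelihood model, noise scales, and plain k-means stand-in for the paper's k-clustering step are assumptions for illustration.

```python
import numpy as np

def fuse_step(particles, detections, sigma=0.3, rng=None):
    """One particle-filter update: weight each particle by its distance to the
    nearest detection (from either the laser or the RGB-D camera), then
    resample with jitter. `detections` is an (M, 2) array of x-y candidates."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(particles[:, None, :] - detections[None, :, :], axis=2)
    w = np.exp(-d.min(axis=1) ** 2 / (2 * sigma ** 2))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx] + rng.normal(0, 0.05, particles.shape)

def k_cluster(particles, k, iters=10, rng=None):
    """Plain k-means over the resampled particles; the cluster means are the
    extracted person position estimates."""
    rng = rng or np.random.default_rng()
    centers = particles[rng.choice(len(particles), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(particles[:, None] - centers[None],
                                axis=2).argmin(axis=1)
        centers = np.array([particles[labels == j].mean(axis=0)
                            for j in range(k)])
    return centers
```

    The cluster mean for the tracked user is what the classical distance-keeping controller would then take as its reference.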

    A portable three-dimensional LIDAR-based system for long-term and wide-area people behavior measurement

    It is important to measure and analyze people's behavior when designing systems that interact with people. This article describes a portable people behavior measurement system using a three-dimensional LIDAR. In this system, an observer carries the system, equipped with a three-dimensional Light Detection and Ranging (LIDAR) sensor, and follows the persons to be measured while keeping them in the sensor's view. The system estimates the sensor pose within a three-dimensional environmental map and tracks the target persons, enabling long-term and wide-area behavior measurements that are difficult for existing people-tracking systems. As a field test, we recorded the behavior of professional caregivers attending elderly persons with dementia in a hospital. A preliminary analysis of the behavior reveals how the caregivers choose their attending position while checking the surrounding people and environment. Based on this analysis, empirical rules for designing the behavior of attendant robots are proposed.
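    The key step that makes the measurements "wide-area" is composing the estimated sensor pose with the sensor-relative track, so that person positions land in the fixed map frame even though the observer is walking. A minimal 2D sketch of that frame transform (the actual system works with a full 3D pose; the planar pose here is a simplifying assumption):

```python
import math

def person_in_map(person_xy, sensor_pose):
    """Project a tracked person's position from the (moving) sensor frame
    into the fixed map frame, given a planar sensor pose (x, y, yaw)
    estimated by scan-to-map localization."""
    px, py = person_xy
    sx, sy, yaw = sensor_pose
    c, s = math.cos(yaw), math.sin(yaw)
    # rotate by the sensor yaw, then translate by the sensor position
    return (sx + c * px - s * py, sy + s * px + c * py)
```

    Accumulating these map-frame points over a session yields the long-term trajectories used in the caregiver analysis.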

    Tracking people within groups with rgb-d data

    This paper proposes a very fast and robust multi-people tracking algorithm suitable for mobile platforms equipped with an RGB-D sensor. Our approach features a novel depth-based sub-clustering method explicitly designed for detecting people within groups or near the background, and a three-term joint likelihood for limiting drifts and ID switches. Moreover, an online-learned appearance classifier is proposed that robustly specializes on a track while using the other detections as negative examples. Tests have been performed on data acquired from a mobile robot in indoor environments and on a publicly available dataset acquired with three RGB-D sensors, and results have been evaluated with the CLEAR MOT metrics. Our method reaches near state-of-the-art performance at very high frame rates in our distributed ROS-based CPU implementation.
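    The idea behind sub-clustering a merged blob into individual people can be illustrated with a height-map sketch: heads show up as separate runs of tall bins along the ground plane, and each point is assigned to its nearest head. This is only in the spirit of the paper's depth-based method; the bin width, head threshold, and 1D layout are illustrative assumptions.

```python
import numpy as np

def split_group(points, bin_w=0.1, head_h=1.4):
    """Split one merged cluster into per-person sub-clusters.
    `points` is an (N, 2) array of (x, height) values; each contiguous run
    of x-bins whose maximum height exceeds `head_h` is taken as one head,
    and every point is assigned to the nearest head."""
    bins = np.floor(points[:, 0] / bin_w).astype(int)
    lo, hi = bins.min(), bins.max()
    top = np.full(hi - lo + 1, -np.inf)          # max height per x-bin
    for b, h in zip(bins - lo, points[:, 1]):
        top[b] = max(top[b], h)
    tall = top > head_h
    heads, in_run, start = [], False, 0
    for i, t in enumerate(tall):                 # contiguous tall runs -> heads
        if t and not in_run:
            in_run, start = True, i
        elif not t and in_run:
            in_run = False
            heads.append((lo + (start + i - 1) / 2 + 0.5) * bin_w)
    if in_run:
        heads.append((lo + (start + len(tall) - 1) / 2 + 0.5) * bin_w)
    heads = np.array(heads)
    labels = np.abs(points[:, [0]] - heads[None, :]).argmin(axis=1)
    return heads, labels
```

    Separating the blob this way is what lets the tracker keep distinct IDs for people walking shoulder to shoulder, which a plain Euclidean clustering would fuse.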

    OpenPTrack: Open Source Multi-Camera Calibration and People Tracking for RGB-D Camera Networks

    OpenPTrack is open-source software for multi-camera calibration and people tracking in RGB-D camera networks. It can track people in large volumes at sensor frame rate and currently supports a heterogeneous set of 3D sensors. In this work, we describe its user-friendly calibration procedure, which consists of simple steps with real-time feedback and yields accurate estimates of the camera poses that are then used for tracking people. On top of a calibration based on moving a checkerboard within the tracking space and a global optimization of camera and checkerboard poses, a novel procedure that aligns people detections coming from all sensors in an x-y-time space is used to refine the camera poses. While people detection is executed locally, on the machines connected to each sensor, tracking is performed by a single node that takes into account detections from the whole network. We detail how a cascade of algorithms working on depth point clouds and on color, infrared, and disparity images performs people detection for different types of sensors and under any indoor lighting conditions. We present experiments showing that a considerable improvement can be obtained with the proposed calibration refinement procedure that exploits people detections, and we compare Kinect v1, Kinect v2, and Mesa SR4500 performance for people tracking applications. OpenPTrack is based on the Robot Operating System and the Point Cloud Library and has already been adopted in networks of up to ten imagers for interactive arts, education, culture, and human–robot interaction applications.
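    The x-y part of the calibration refinement idea, aligning time-matched person detections from two sensors, reduces to estimating a rigid 2D transform between two point sets, which has a closed-form least-squares solution (Kabsch/Procrustes). This is a sketch of that sub-problem, not OpenPTrack's actual refinement code; the time-matching of detections is assumed to have been done already.

```python
import numpy as np

def refine_pose(src, dst):
    """Least-squares 2D rigid transform (R, t) mapping `src` onto `dst`,
    where corresponding rows are person detections from two sensors matched
    by timestamp. Solved in closed form via SVD."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t
```

    Because a walking person sweeps out a long, well-spread trajectory, this gives a much better-conditioned alignment than a checkerboard seen from a distance, which is why the detection-based refinement improves on the initial calibration.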