
    Human detection from aerial imagery for automatic counting of shellfish gatherers

    Automatic human identification from aerial image time series or video sequences is a challenging problem. We propose a complete processing chain that operates in the context of counting recreational shellfish gatherers in a coastal environment (the Gulf of Morbihan, South Brittany, France). It starts from a series of aerial photographs and builds a mosaic in order to prevent multiple occurrences of the same objects in the overlapping parts of the aerial images. To this end, several stitching techniques are reviewed and discussed in the context of large aerial scenes. People detection is then addressed through a sliding-window analysis combining the HOG descriptor with a supervised classifier. Several classification methods are compared, including SVM, Random Forests, and AdaBoost. Experimental results show the interest of the proposed approach and provide directions for future research.
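    As a rough sketch of the detection stage described above (not the authors' implementation), the following Python combines scikit-image's HOG descriptor with a linear SVM in a sliding-window scan; the 64x128 window, step size and HOG parameters are illustrative assumptions:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

# Illustrative detection-window geometry (classic HOG pedestrian size).
WIN_H, WIN_W, STEP = 128, 64, 16

def hog_features(patch):
    """HOG descriptor of one grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_detector(pos_patches, neg_patches):
    """Fit a linear SVM on HOG features of labelled training patches."""
    X = np.array([hog_features(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    return LinearSVC().fit(X, y)

def sliding_window_detect(mosaic, clf):
    """Scan the image mosaic with a fixed-size window; return hit positions."""
    hits = []
    for top in range(0, mosaic.shape[0] - WIN_H + 1, STEP):
        for left in range(0, mosaic.shape[1] - WIN_W + 1, STEP):
            patch = mosaic[top:top + WIN_H, left:left + WIN_W]
            if clf.predict([hog_features(patch)])[0] == 1:
                hits.append((top, left))
    return hits
```

    Replacing LinearSVC with sklearn's RandomForestClassifier or AdaBoostClassifier gives the kind of classifier comparison the abstract reports.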

    Automatic Samples Selection Using Histogram of Oriented Gradients (HOG) Feature Distance

    Finding victims at a disaster site is the primary goal of Search-and-Rescue (SAR) operations. Many technologies for searching for disaster victims through aerial imaging have come out of research, but most of them struggle to detect victims at tsunami disaster sites, where victims and backgrounds look similar. This research collects post-tsunami aerial images from the internet to build a dataset and a model for detecting tsunami disaster victims. The dataset is built from the distances between the Histogram-of-Oriented-Gradients (HOG) features of the samples: we measure the HOG feature distance between all samples taken from each photo and keep those at the longest distances as candidates, which are then manually classified into victim (positive) and non-victim (negative) samples. The resulting tsunami disaster victim dataset was evaluated using Leave-One-Out (LOO) cross-validation with a Support-Vector-Machine (SVM) classifier. The experimental results on two test photos show 61.70% precision, 77.60% accuracy, 74.36% recall and an F-measure of 67.44% in distinguishing victim (positive) from non-victim (negative) samples.
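    The abstract does not spell out the exact "longest distance" selection rule, so the sketch below interprets it as greedy farthest-point sampling over Euclidean distances between HOG feature vectors (function names are ours; scikit-image assumed):

```python
import numpy as np
from skimage.feature import hog

def hog_vec(patch):
    """HOG feature vector of one grayscale patch."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def select_diverse_samples(patches, k):
    """Greedily pick k patches that are mutually far apart in HOG space.

    Starting from an arbitrary patch, repeatedly add the patch whose
    minimum distance to the already selected set is largest; the
    selected candidates are then labelled victim/non-victim by hand.
    """
    feats = np.array([hog_vec(p) for p in patches])
    selected = [0]
    while len(selected) < k:
        sel = feats[selected]
        # Distance from every patch to each already selected patch.
        dists = np.linalg.norm(feats[:, None, :] - sel[None, :, :], axis=2)
        min_dist = dists.min(axis=1)
        min_dist[selected] = -1.0  # never re-pick a selected patch
        selected.append(int(min_dist.argmax()))
    return selected
```

    The hand-labelled result can then be evaluated as described in the abstract, e.g. with sklearn.model_selection.LeaveOneOut and an SVM classifier.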

    A Survey on Pedestrian Detection

    Pedestrian detection is an active and challenging area of research in computer vision. This study conducts a detailed survey of state-of-the-art pedestrian detection methods from 2005 to 2011, focusing on the two most important problems: feature extraction, and classification and localization. We divide these methods into different categories: pedestrian features are divided into three subcategories (low-level features, learning-based features, and hybrid features), while classification and localization methods are divided into two subcategories (sliding window and beyond sliding window). Following this taxonomy, the pros and cons of the different approaches are discussed. Finally, some practical experience in constructing a robust pedestrian detector is presented, and future research trends are proposed. Supported by the National Natural Science Foundation of China (No. 60873179); the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20090121110032); the Shenzhen Science and Technology Plan, Basic Research (No. JC200903180630A); the Shenzhen Science and Technology R&D Fund, Shenzhen-Hong Kong Innovation Circle Program (No. ZYB200907110169A); and the Fujian Provincial Department of Education Fund (No. JA10196).
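    Sliding-window detectors of the kind surveyed here usually complete localization by merging overlapping responses with non-maximum suppression; the following generic, IoU-based greedy NMS (a standard routine, not code from the survey) illustrates the step:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Keep the highest-scoring box, drop boxes overlapping it by more
    than iou_thresh, and repeat on the remainder.
    """
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection rectangle between box i and the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[rest, 2] - boxes[rest, 0]) *
                 (boxes[rest, 3] - boxes[rest, 1]))
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]
    return keep
```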

    General Concepts for Human Supervision of Autonomous Robot Teams

    For many dangerous, dirty or dull tasks, as in search and rescue missions, deploying autonomous teams of robots can be beneficial for several reasons. First, robots can replace humans in the workspace. Second, autonomous robots reduce the workload of a human compared to teleoperated robots, so multiple robots can in principle be supervised by a single human. Third, teams of robots allow operation that is distributed in time and space. This thesis investigates concepts for efficiently enabling a human to supervise and support an autonomous robot team, since common concepts for the teleoperation of robots do not apply because of the high mental workload. The goal is to find a way between the two extremes of full autonomy and pure teleoperation, by allowing the robots' level of autonomy to be adapted to the current situation and the needs of the human supervisor. The methods presented in this thesis exploit the complementary strengths of humans and robots: the robots do what they are good at, while the human supports the robots in situations that match human strengths.

    To enable this type of collaboration between a human and a robot team, the human needs adequate knowledge about the current state of the robots, the environment, and the mission. For this purpose, this thesis develops the concept of situation overview (SO), composed of two parts: robot SO and mission SO. Robot SO comprises information about the state and activities of each single robot in the team, while mission SO covers the progress of the mission and the cooperation between the robots. To obtain SO, a new event-based communication concept is presented that lets the robots aggregate information into discrete events using methods from complex event processing. The quality and quantity of the events actually sent to the supervisor can be adapted at runtime by defining positive and negative policies for (not) sending events that fulfill specific criteria (sketched in code below). This reduces the required communication bandwidth compared to sending all available data.

    Based on SO, the supervisor can interact efficiently with the robot team. Interactions can be initiated either by the human or by the robots. The concept developed for robot-initiated interactions is based on queries, which allow the robots to transfer decisions to another process or to the supervisor. Various modes for answering the queries, ranging from fully autonomous to purely human decisions, allow the robots' level of autonomy to be adapted at runtime. Human-initiated interactions are limited to high-level commands, whereas interactions on the action level (e.g., teleoperation) are avoided, to account for the specific strengths of humans and robots. These commands can in principle be applied to quite general classes of task allocation methods for autonomous robot teams, e.g., in the form of specific restrictions that are introduced into the system as constraints. In this way, the desired allocations emerge implicitly from the introduced constraints, and the task allocation method does not need to be aware of the human supervisor in the loop. The method is applicable to different task allocation approaches, e.g., instantaneous or time-extended task assignments, and centralized or distributed algorithms.

    The presented methods are evaluated in a number of experiments with physical and simulated scenarios from urban search and rescue as well as robot soccer, and during robot competitions. The results show that with these methods a human supervisor can significantly improve the robot team's performance.
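    As a hedged illustration of the positive/negative event policies described above (all names here, such as Event, should_send and the example predicates, are hypothetical, not from the thesis), runtime-tunable filtering could look like this:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """An aggregated event, e.g. produced by complex event processing."""
    robot: str
    kind: str       # e.g. "battery_low", "task_done", "heartbeat"
    severity: int   # higher means more important

# Hypothetical policy lists, editable at runtime to tune the
# quality and quantity of events sent to the supervisor.
positive_policies = [lambda e: e.severity >= 3]        # always send
negative_policies = [lambda e: e.kind == "heartbeat"]  # never send

def should_send(event, default=True):
    """Decide whether to forward an event to the supervisor.

    Giving positive policies precedence over negative ones is our
    assumption; the thesis does not fix an ordering here.
    """
    if any(p(event) for p in positive_policies):
        return True
    if any(p(event) for p in negative_policies):
        return False
    return default
```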

    Novel robust computer vision algorithms for micro autonomous systems

    People detection and tracking are essential components of many autonomous platforms, interactive systems and intelligent vehicles used in search and rescue operations and similar humanitarian applications. Researchers currently focus on vision sensors such as cameras because of their advantages over other sensor types: cameras are information-rich, relatively inexpensive and easily available. In addition, 3D information can be obtained from stereo vision, or by triangulating over several frames in monocular configurations. Another way to obtain 3D data is with RGB-D sensors (e.g. Kinect), which provide both image and depth data and have become increasingly attractive in recent years due to their affordable price and availability to researchers. The aim of this research was to find robust multi-target detection and tracking algorithms for Micro Autonomous Systems (MAS) that incorporate the RGB-D sensor. The contributions are several novel, robust computer vision algorithms. A new framework for human body detection from video, adapted from the Viola-Jones framework, was proposed to detect a single person. The 2D Multi-Target Detection and Tracking (MTDT) algorithm applies a Gaussian Mixture Model (GMM) to reduce noise in the pre-processing stage; blob analysis detects targets, and a Kalman filter tracks them. The 3D MTDT extends beyond 2D by using depth data from the RGB-D sensor in the pre-processing stage. A Bayesian model provides multiple cues, including detection of the upper body, face, skin colour, motion and shape, and the Kalman filter provides speed and robustness in track management. Simultaneous Localisation and Mapping (SLAM) fused with 3D information was also investigated: the new framework introduces front-end and back-end processing, where the front end consists of localisation steps, pose refinement and a loop-closing system, and the back end focuses on pose-graph optimisation to eliminate errors.
    The proposed computer vision algorithms demonstrated improved speed and robustness, and the frameworks produced impressive results. The new algorithms can improve performance in real-time applications including surveillance, vision-based navigation, environmental perception and vision-based control systems on MAS.
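    A minimal OpenCV sketch of the 2D MTDT pre-processing and detection stage described above (parameters and thresholds are illustrative, not the values used in the thesis):

```python
import numpy as np
import cv2

# GMM background model for noise-robust foreground extraction.
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

# Constant-velocity Kalman filter for a single track:
# state [x, y, vx, vy], measurement [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)

def detect_blobs(frame, min_area=200):
    """Foreground mask -> external contours -> blob centroids."""
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            x, y, w, h = cv2.boundingRect(c)
            centroids.append((x + w / 2.0, y + h / 2.0))
    return centroids
```

    Per frame, kf.predict() would give each track's expected position and kf.correct() would fold in the associated blob centroid; full track management (data association, track creation and deletion) is omitted here.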