
    Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface

    A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into the 3D grids of the flag map, used as a comparative table, in order to remove redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as the terrain reconstruction result. Our proposed methods were tested in an outdoor environment. The results show that the proposed system was able to rapidly generate terrain storage and provide high-resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
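    The voxel-based flag map described above can be illustrated with a minimal sketch: incoming points are quantized to voxel indices, and only the first point landing in each occupied voxel is kept, discarding redundant points. The function name and the `voxel_size` parameter are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def flag_map_filter(points, voxel_size=0.1):
        """Quantize 3D points into voxel grid indices (the 'flag map') and
        keep one representative point per occupied voxel.
        Illustrative sketch; voxel_size is an assumed parameter."""
        keys = np.floor(points / voxel_size).astype(np.int64)
        # np.unique over rows returns the index of the first point that
        # occupies each distinct voxel; later points in the same voxel
        # are treated as redundant and dropped.
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]
    ```

    In the paper's pipeline this filtering runs incrementally per frame on the CPU before the surviving points are merged into the nonground PDB and terrain mesh.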

    Automated Space Classification for Network Robots in Ubiquitous Environments

    Network robots provide services to users in smart spaces while being connected to ubiquitous instruments through wireless networks in ubiquitous environments. For more effective behavior planning of network robots, it is necessary to reduce the state space by recognizing a smart space as a set of spaces. This paper proposes a space classification algorithm based on automatic graph generation and naive Bayes classification. The proposed algorithm first filters spaces in order of priority using automatically generated graphs, thereby minimizing the number of tasks that need to be predefined by a human. The filtered spaces then yield the final space classification result via naive Bayes classification. The results of experiments conducted using virtual agents in virtual environments indicate that the proposed algorithm outperforms conventional naive Bayes space classification.
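    The naive Bayes stage named above can be sketched as a minimal classifier over discrete space features. The class below is an illustrative stand-in (the feature encoding, class names, and Laplace smoothing are assumptions, not details from the paper).

    ```python
    import math
    from collections import defaultdict

    class NaiveBayesSpaceClassifier:
        """Minimal naive Bayes over discrete per-space features.
        Illustrative sketch only; features are assumed binary."""

        def __init__(self):
            self.class_counts = defaultdict(int)
            self.feature_counts = defaultdict(lambda: defaultdict(int))

        def fit(self, samples, labels):
            for feats, label in zip(samples, labels):
                self.class_counts[label] += 1
                for i, v in enumerate(feats):
                    self.feature_counts[label][(i, v)] += 1

        def predict(self, feats):
            best, best_lp = None, float("-inf")
            total = sum(self.class_counts.values())
            for c, n in self.class_counts.items():
                lp = math.log(n / total)  # class prior
                for i, v in enumerate(feats):
                    # Laplace smoothing over assumed binary feature values
                    lp += math.log((self.feature_counts[c][(i, v)] + 1) / (n + 2))
                if lp > best_lp:
                    best, best_lp = c, lp
            return best
    ```

    In the proposed algorithm, this classifier would only be run on the candidate spaces that survive the graph-based priority filtering, which is what reduces the state space.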

    Advanced Machine Learning for Gesture Learning and Recognition Based on Intelligent Big Data of Heterogeneous Sensors

    With intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction using machine learning algorithms. Achieving high gesture recognition accuracy is crucial, and current systems learn extensive gestures in advance to augment their recognition accuracy. However, accurate gesture recognition relies on identifying and editing numerous gestures collected from the actual end users of the system, and this final end-user learning component remains troublesome for most existing gesture recognition systems. This paper proposes a method that facilitates end-user gesture learning and recognition by improving the editing process applied to intelligent big data collected through end-user gestures. The proposed method recognizes more complex and precise gestures by merging gestures collected from multiple sensors and processing them as a single gesture. To evaluate the proposed method, it was used in a shadow puppet performance that could interact with on-screen animations. An average gesture recognition rate of 90% was achieved in the experimental evaluation, demonstrating the efficacy and intuitiveness of the proposed method for editing visualized learning gestures.
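    One plausible reading of "merging gestures collected from multiple sensors and processing them as a single gesture" is frame-wise concatenation of time-aligned sensor streams into one feature sequence. The sketch below assumes that interpretation; the function name and data layout are hypothetical.

    ```python
    def merge_gestures(sensor_streams):
        """Merge time-aligned frames from multiple sensors into a single
        gesture sequence. Each stream is a list of per-frame feature tuples;
        streams are truncated to the shortest one and concatenated frame-wise.
        Illustrative assumption, not the paper's actual merging procedure."""
        n = min(len(s) for s in sensor_streams)
        return [tuple(v for s in sensor_streams for v in s[t]) for t in range(n)]
    ```

    The merged sequence can then be fed to any sequence-based recognizer as if it came from one composite sensor.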

    Multimedia System for Real-Time Photorealistic Nonground Modeling of 3D Dynamic Environment for Remote Control System

    Nowadays, unmanned ground vehicles (UGVs) are widely used for many applications. UGVs have sensors including multi-channel laser sensors, two-dimensional (2D) cameras, Global Positioning System receivers, and inertial measurement units (GPS–IMU). Multi-channel laser sensors and 2D cameras are installed to collect information regarding the environment surrounding the vehicle. Moreover, the GPS–IMU system is used to determine the position, acceleration, and velocity of the vehicle. This paper proposes a fast and effective method for modeling nonground scenes using multiple types of sensor data captured through a remote-controlled robot. The multi-channel laser sensor returns a point cloud in each frame. We separated the point clouds into ground and nonground areas before modeling the three-dimensional (3D) scenes. The ground part was used to create a dynamic triangular mesh based on the height map and vehicle position. The modeling of nonground parts in dynamic environments including moving objects is more challenging than modeling of ground parts. In the first step, we applied our object segmentation algorithm to divide nonground points into separate objects. Next, an object tracking algorithm was implemented to detect dynamic objects. Subsequently, nonground objects other than large dynamic ones, such as cars, were separated into two groups: surface objects and non-surface objects. We employed colored particles to model the non-surface objects. To model the surface and large dynamic objects, we used two dynamic projection panels to generate 3D meshes. In addition, we applied two processes to optimize the modeling result. First, we removed any trace of the moving objects, and collected the points on the dynamic objects in previous frames. Next, these points were merged with the nonground points in the current frame. We also applied slide window and near point projection techniques to fill the holes in the meshes. 
Finally, we applied texture mapping using 2D images captured by three cameras installed at the front of the robot. The experimental results show that our nonground modeling method can produce photorealistic, real-time 3D scenes around a remote-controlled robot.
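    The ground/nonground separation that precedes the modeling steps above can be sketched with a simple per-cell height test: the lowest point in each 2D grid cell approximates the local ground height, and points far above it are treated as nonground. The function name and both thresholds are illustrative assumptions; the paper's height-map construction may differ.

    ```python
    import numpy as np

    def split_ground(points, cell=0.5, height_thresh=0.3):
        """Split an (N, 3) point cloud into ground and nonground sets using
        the lowest point per 2D cell as the local ground height.
        Illustrative sketch; cell size and threshold are assumed values."""
        cells = np.floor(points[:, :2] / cell).astype(np.int64)
        lowest = {}
        for i, c in enumerate(map(tuple, cells)):
            if c not in lowest or points[i, 2] < lowest[c]:
                lowest[c] = points[i, 2]
        ground_mask = np.array(
            [points[i, 2] - lowest[c] < height_thresh
             for i, c in enumerate(map(tuple, cells))]
        )
        return points[ground_mask], points[~ground_mask]
    ```

    In the paper's pipeline, the ground set would feed the height-map triangular mesh while the nonground set goes on to object segmentation and tracking.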

    Traversable Ground Surface Segmentation and Modeling for Real-Time Mobile Mapping

    A remote vehicle operator must quickly decide on the motion and path of the vehicle, so rapid and intuitive feedback about the real environment is vital for effective control. This paper presents a real-time traversable ground surface segmentation and intuitive representation system for remote operation of a mobile robot. First, a terrain model using a voxel-based flag map is proposed for incrementally registering large-scale point clouds in real time. Subsequently, a ground segmentation method with a Gibbs-Markov random field (Gibbs-MRF) model is applied to detect ground data in the reconstructed terrain. Finally, we generate a textured mesh for ground surface representation by mapping the triangles in the terrain mesh onto the captured video images. To speed up the computation, we program a graphics processing unit (GPU) to process large-scale datasets in parallel. Our proposed methods were tested in an outdoor environment. The results show that ground data is segmented effectively and the ground surface is represented intuitively.
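    The Gibbs-MRF ground segmentation can be illustrated with a much-simplified stand-in: iterated conditional modes (ICM) over a 2D grid, where each cell's ground/nonground label trades off a per-cell data score against agreement with its 4-neighbors. This is not the paper's inference procedure, just a minimal MRF-flavored sketch; `beta` and the scoring are assumptions.

    ```python
    import numpy as np

    def icm_smooth(unary_ground, beta=1.0, iters=5):
        """Smooth a binary ground/nonground label grid with an ICM pass.
        unary_ground: 2D array; positive values favor the ground label.
        Illustrative stand-in for full Gibbs-MRF inference."""
        labels = (unary_ground > 0).astype(np.int8)
        h, w = labels.shape
        for _ in range(iters):
            for y in range(h):
                for x in range(w):
                    # Collect the labels of in-bounds 4-neighbors
                    nb = [labels[yy, xx]
                          for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                          if 0 <= yy < h and 0 <= xx < w]
                    # Data term plus smoothness reward for agreeing neighbors
                    score1 = unary_ground[y, x] + beta * sum(1 for n in nb if n == 1)
                    score0 = beta * sum(1 for n in nb if n == 0)
                    labels[y, x] = 1 if score1 > score0 else 0
        return labels
    ```

    The smoothness term is what lets a weakly ambiguous cell surrounded by confident ground cells be relabeled as ground, which is the qualitative behavior an MRF prior contributes over a purely per-cell classifier.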

    Sloped Terrain Segmentation for Autonomous Drive Using Sparse 3D Point Cloud

    A ubiquitous environment for road travel that uses wireless networks requires minimizing data exchange between vehicles. An algorithm that can segment the ground in real time is necessary to obtain location data between vehicles simultaneously executing autonomous driving. This paper proposes a framework for segmenting the ground in real time using a sparse three-dimensional (3D) point cloud acquired from undulating terrain. A sparse 3D point cloud can be acquired by scanning the geography using light detection and ranging (LiDAR) sensors. For efficient ground segmentation, 3D point clouds are quantized in units of volume pixels (voxels) and overlapping data is eliminated. We reduce the nonoverlapping voxels to two dimensions by constructing a lowermost height map. The ground area is determined on the basis of the number of voxels in each voxel group. We achieve real-time ground segmentation with an approach that minimizes comparisons between neighboring voxels. Furthermore, we experimentally verify that ground segmentation executes in about 19.31 ms per frame.
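    The lowermost height map mentioned above reduces occupied voxels to two dimensions by keeping, for each (x, y) column, only the lowest occupied voxel. A minimal sketch, assuming voxels are given as integer index triples:

    ```python
    def lowermost_heightmap(voxels):
        """Reduce occupied voxel coordinates (ix, iy, iz) to a 2D map holding
        the lowest iz per (ix, iy) column. Illustrative sketch of the
        'lowermost height map' reduction; the paper's layout may differ."""
        hmap = {}
        for ix, iy, iz in voxels:
            key = (ix, iy)
            if key not in hmap or iz < hmap[key]:
                hmap[key] = iz
        return hmap
    ```

    Keeping only one height per column is what makes the subsequent ground determination a 2D problem, so neighboring-voxel comparisons stay cheap enough for the reported per-frame timing.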