60 research outputs found
Real-Time Terrain Storage Generation from Multiple Sensors towards Mobile Robot Operation Interface
A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system including a nonground point database (PDB), ground mesh database (MDB), and texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into 3D grids of the flag map as a comparative table in order to remove the redundant points. We integrate the large-scale 3D point clouds into a nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles in the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as a terrain reconstruction result. Our proposed methods were tested in an outdoor environment. Our results show that the proposed system was able to rapidly generate terrain storage and provide high-resolution terrain representation for mobile mapping services and a graphical user interface between remote operators and mobile robots.
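The voxel-based flag map described above amounts to a voxel-hash lookup that discards points falling into already-occupied cells. The following is a minimal Python sketch of that deduplication step, under assumed names and parameters (`insert_points`, a 0.1 m voxel size); it is not the authors' implementation.

```python
import numpy as np

def insert_points(points, flag_map, voxel_size=0.1):
    """Insert new scan points into a voxel flag map, keeping only the first
    point that falls into each voxel; redundant points are dropped.

    points   : (N, 3) array of 3D points from the latest scan
    flag_map : set of occupied voxel indices accumulated so far
    Returns the non-redundant points to be added to the point database.
    """
    kept = []
    # Quantize each point to its voxel index (the "comparative table" lookup).
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    for p, v in zip(points, map(tuple, voxel_ids)):
        if v not in flag_map:      # voxel not yet flagged -> new information
            flag_map.add(v)
            kept.append(p)
    return np.asarray(kept)

# Usage: incrementally register scans as they arrive.
flag_map = set()
scan = np.random.rand(1000, 3) * 10.0   # placeholder for one sensor scan
unique_pts = insert_points(scan, flag_map)
```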
Wearable Computing for Defence Automation: Opportunities and Challenges in 5G Network
DR.CPO: Diversified and Realistic 3D Augmentation via Iterative Construction, Random Placement, and HPR Occlusion
In autonomous driving, data augmentation is commonly used to improve 3D object detection. The most basic methods include insertion of copied objects and rotation and scaling of the entire training frame, and numerous variants have been developed as well. The existing methods, however, are considerably limited compared with the variety of real-world possibilities. In this work, we develop a diversified and realistic augmentation method that can flexibly construct a whole-body object, freely locate and rotate the object, and apply self-occlusion and external occlusion accordingly. To improve the diversity of whole-body object construction, we develop an iterative method that stochastically combines multiple objects observed in the real world into a single object. Unlike the existing augmentation methods, the constructed objects can be randomly located and rotated in the training frame because proper occlusions are applied to the whole-body objects in the final step. Finally, proper self-occlusion at each local object level and external occlusion at the global frame level are applied using the computationally efficient Hidden Point Removal (HPR) algorithm. HPR is also used to adaptively control the point density of each object according to the object's distance from the LiDAR. Experimental results show that the proposed DR.CPO algorithm is data-efficient and model-agnostic and incurs no computational overhead. Moreover, DR.CPO improves mAP performance by 2.08% compared with the best 3D detection result known for the KITTI dataset. The code is available at https://github.com/SNU-DRL/DRCPO.git.
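Hidden Point Removal is a standard visibility test (spherical flipping plus a convex hull), and Open3D ships an implementation. The sketch below illustrates how an occlusion step of this kind can be applied to a placed object; the object, LiDAR position, and radius heuristic are assumptions for illustration, not the DR.CPO code.

```python
import numpy as np
import open3d as o3d

# Build a point cloud for one augmented object (placeholder sphere here).
mesh = o3d.geometry.TriangleMesh.create_sphere(radius=1.0)
pcd = mesh.sample_points_uniformly(number_of_points=5000)

# Place the object somewhere in the frame, then keep only the points a
# sensor at the origin could actually see (self-occlusion via HPR).
pcd.translate([8.0, 2.0, 0.0])
camera = [0.0, 0.0, 0.0]                               # assumed LiDAR position
diameter = np.linalg.norm(pcd.get_max_bound() - pcd.get_min_bound())
radius = diameter * 100                                # spherical-flipping radius
_, visible_idx = pcd.hidden_point_removal(camera, radius)
visible = pcd.select_by_index(visible_idx)             # occluded points removed
```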
Automated Space Classification for Network Robots in Ubiquitous Environments
Network robots provide services to users in smart spaces while being connected to ubiquitous instruments through wireless networks in ubiquitous environments. For more effective behavior planning of network robots, it is necessary to reduce the state space by recognizing a smart space as a set of spaces. This paper proposes a space classification algorithm based on automatic graph generation and naive Bayes classification. The proposed algorithm first filters spaces in order of priority using automatically generated graphs, thereby minimizing the number of tasks that need to be predefined by a human. The filtered spaces then induce the final space classification result using naive Bayes space classification. The results of experiments conducted using virtual agents in virtual environments indicate that the performance of the proposed algorithm is better than that of conventional naive Bayes space classification.
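For the final classification stage, a naive Bayes classifier over instrument-observation features can be sketched as below. The feature encoding and labels are toy assumptions for illustration; the paper's graph-based priority filtering step is not reproduced here.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy feature vectors: counts of ubiquitous-instrument observations
# (e.g., [stove, sofa, bed, desk]) per space sample -- assumed encoding.
X = np.array([
    [5, 0, 0, 1],   # kitchen-like
    [0, 4, 0, 1],   # living-room-like
    [0, 1, 5, 0],   # bedroom-like
    [0, 0, 0, 6],   # office-like
])
y = ["kitchen", "living_room", "bedroom", "office"]

clf = MultinomialNB().fit(X, y)

# A candidate space that survived the graph-based filtering step is
# assigned the most probable label.
candidate = np.array([[4, 1, 0, 0]])
print(clf.predict(candidate))          # -> ['kitchen']
print(clf.predict_proba(candidate))    # class posteriors
```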
Serum 25-hydroxyvitamin D levels and risk of colorectal cancer: an age-stratified analysis
Background and aims: The role of circulating 25-hydroxyvitamin D (25(OH)D) in prevention of early-onset colorectal cancer (CRC) in young adults under 50 years is uncertain. We evaluated the age-stratified associations (<50 vs. ≥50 years) between circulating 25(OH)D levels and the risk of CRC in a large sample of Korean adults. Methods: Our cohort study included 236,382 participants (mean [standard deviation] age, 38.0 [9.0] years) who underwent a comprehensive health examination, including measurement of serum 25(OH)D levels. Serum 25(OH)D levels were categorized as follows: <10, 10–20, and ≥20 ng/mL. CRC, along with the histologic subtype, site, and invasiveness, was ascertained through linkage with the national cancer registry. Cox proportional hazards models were used to estimate hazard ratios (HRs; 95% confidence intervals [CIs]) for incident CRC according to the serum 25(OH)D status, with adjustment for potential confounders. Results: During the 1,393,741 person-years of follow-up (median, 6.5 years; interquartile range, 4.5–7.5 years), 341 participants developed CRC (incidence rate, 19.2 per 10⁵ person-years). Among young individuals aged <50 years, serum 25(OH)D levels were inversely associated with the risk of incident CRC, with HRs (95% CIs) of 0.61 (0.43–0.86) and 0.41 (0.27–0.63) for 25(OH)D 10–19 and ≥20 ng/mL, respectively, with respect to the reference (<10 ng/mL) (p for trend <0.001, time-dependent model). Significant associations were evident for adenocarcinoma, colon cancer, and invasive cancers. For those aged ≥50 years, associations were similar, although slightly attenuated compared to younger individuals. Conclusions: Serum 25(OH)D levels may have beneficial associations with the risk of developing CRC for both early-onset and late-onset disease.
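Hazard ratios of the kind reported above are typically obtained from a Cox proportional hazards regression; a minimal sketch with the lifelines library is shown below. The data frame, covariates, and values are entirely illustrative and bear no relation to the study's cohort.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: follow-up time (years), CRC event indicator, vitamin D
# category dummies, and two example confounders -- all values illustrative.
df = pd.DataFrame({
    "followup_years": [6.5, 4.2, 7.1, 5.8, 6.9, 3.3],
    "crc_event":      [0,   1,   0,   1,   0,   1],
    "vitd_10_19":     [1,   0,   0,   1,   0,   0],   # 10-19 ng/mL vs. <10
    "vitd_ge_20":     [0,   0,   1,   0,   1,   0],   # >=20 ng/mL vs. <10
    "age":            [45,  48,  52,  39,  61,  44],
    "bmi":            [23.1, 27.4, 24.0, 22.5, 26.2, 25.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="crc_event")
cph.print_summary()   # the exp(coef) column gives the adjusted hazard ratios
```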
Multifunctional-high resolution imaging plate based on hydrophilic graphene for digital pathology
In the present study, we showed that hydrophilic graphene can serve as an ideal imaging plate for biological specimens. Because graphene is a single-atom-thick semi-metal with low secondary electron emission, array tomography analysis of serial sections of biological specimens on a graphene substrate showed excellent image quality with improved z-axis resolution, without any conductive surface coating. However, the hydrophobic nature of graphene makes the placement of biological specimens difficult; therefore, graphene functionalized with a polydimethylsiloxane oligomer was fabricated using a simple soft lithography technique and then processed with oxygen plasma to obtain hydrophilic graphene with minimal damage to the graphene. High-quality scanning electron microscopy images of biological specimens, free from charging effects or distortion, were obtained, and the optical transparency of graphene enabled fluorescence imaging of the specimen; high-resolution correlated electron and light microscopy analysis of the specimen became possible with the hydrophilic graphene plate.
Object-Aware 3D Scene Reconstruction from Single 2D Images of Indoor Scenes
Recent studies have shown that deep learning achieves excellent performance in reconstructing 3D scenes from multiview images or videos. However, these reconstructions do not provide the identities of objects, and object identification is necessary for a scene to be functional in virtual reality or interactive applications. The objects in a scene reconstructed as one mesh are treated as a single object, rather than individual entities that can be interacted with or manipulated. Reconstructing an object-aware 3D scene from a single 2D image is challenging because the image conversion process from a 3D scene to a 2D image is irreversible, and the projection from 3D to 2D discards a dimension. To alleviate the effects of dimension reduction, we propose a module that generates depth features to aid the 3D pose estimation of objects. Additionally, we developed a novel approach to mesh reconstruction that combines two decoders estimating 3D shapes with different shape representations. By leveraging the principles of multitask learning, our approach demonstrated superior performance in generating complete meshes compared to methods relying solely on implicit representation-based mesh reconstruction networks (e.g., local deep implicit functions), as well as producing more accurate shapes compared to previous approaches for mesh reconstruction from single images (e.g., topology modification networks). The proposed method was evaluated on real-world datasets. The results showed that it could effectively improve the object-aware 3D scene reconstruction performance over existing methods.
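The two-decoder, multitask idea can be illustrated with a small PyTorch sketch: a shared image encoder feeds a depth-feature branch used for pose estimation and two shape decoders with different representations. All layer sizes, head names, and output dimensions are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class DualDecoderReconstructor(nn.Module):
    """Shared encoder + depth-feature branch (aids pose) + two shape decoders
    predicting different representations of the same object."""

    def __init__(self, feat_dim=256, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.depth_head = nn.Linear(feat_dim, latent_dim)      # depth features
        self.pose_head = nn.Linear(feat_dim + latent_dim, 7)   # translation + quaternion
        self.implicit_decoder = nn.Linear(feat_dim + 3, 1)     # SDF-style value per query
        self.mesh_decoder = nn.Linear(feat_dim, 2562 * 3)      # explicit vertex offsets

    def forward(self, image, query_xyz):
        f = self.encoder(image)
        depth_feat = self.depth_head(f)
        pose = self.pose_head(torch.cat([f, depth_feat], dim=-1))
        sdf = self.implicit_decoder(
            torch.cat([f.unsqueeze(1).expand(-1, query_xyz.size(1), -1),
                       query_xyz], dim=-1))
        verts = self.mesh_decoder(f).view(-1, 2562, 3)
        return pose, sdf, verts

model = DualDecoderReconstructor()
pose, sdf, verts = model(torch.randn(2, 3, 128, 128), torch.randn(2, 64, 3))
```

Training both decoders against a shared encoder is what gives the multitask benefit described in the abstract: the explicit branch encourages complete surfaces while the implicit branch preserves shape detail.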
- …