5 research outputs found
A Novel Improved Probability-Guided RANSAC Algorithm for Robot 3D Map Building
This paper presents a novel improved RANSAC algorithm based on probability and Dempster-Shafer (DS) evidence theory to address robust pose estimation in robot 3D map building. In the proposed algorithm, a parameter model is estimated from a random sampling test set. Based on this estimated model, all points are tested to evaluate the fit of the current parameter model, and their probabilities are updated with a total probability formula during the iterations. The maximum size of the inlier set containing each test point is taken into account, via DS evidence theory, to obtain a more reliable evaluation of the test points. Furthermore, forgetting theory is used to filter out unstable inliers and improve the stability of the proposed algorithm. To achieve high performance, an inverse-mapping sampling strategy is adopted based on the updated point probabilities. Both simulations and real experimental results demonstrate the feasibility and effectiveness of the proposed algorithm.
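The core loop described above can be sketched in a few lines. This is a minimal illustration for 2D line fitting, not the authors' method: the 0.9/0.1 likelihoods, the mixing weight in the probability update, and the threshold are all assumed values standing in for the paper's total-probability formula, DS evidence combination, and forgetting mechanism.

```python
import numpy as np

def prob_guided_ransac(points, iters=200, thresh=0.1, seed=0):
    """Sketch of probability-guided RANSAC for 2D line fitting.

    Each point keeps an inlier probability that is refreshed after every
    hypothesis (a rough stand-in for the paper's total-probability update);
    sampling is then biased toward high-probability points via the
    inverse-CDF ("inverse mapping") trick.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    p = np.full(n, 0.5)                    # prior inlier probability per point
    best_model, best_inliers = None, np.zeros(n, bool)
    for _ in range(iters):
        # inverse-mapping sampling: draw two distinct indices from the
        # discrete distribution defined by the current probabilities
        w = p / p.sum()
        i, j = rng.choice(n, size=2, replace=False, p=w)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue                       # degenerate sample, skip
        a = (y2 - y1) / (x2 - x1)          # slope
        b = y1 - a * x1                    # intercept
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < thresh
        # total-probability-style update: mix the old belief with the
        # evidence from this hypothesis (0.9 / 0.1 are assumed likelihoods)
        like = np.where(inliers, 0.9, 0.1)
        p = 0.5 * p + 0.5 * like
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

On a synthetic line with gross outliers, the probability weights quickly concentrate the sampling on consistent points, which is the intuition behind the paper's speed-up.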
SMA-Net: Deep learning-based identification and fitting of CAD models from point clouds
Identification and fitting is an important task in reverse engineering and virtual/augmented reality. Compared to traditional approaches, carrying out such tasks with a deep learning-based method has much room to exploit. This paper presents SMA-Net (Spatial Merge Attention Network), a novel deep learning-based end-to-end bottom-up architecture, specifically focused on fast identification and fitting of CAD models from point clouds. The network is composed of three parts whose strengths are clearly highlighted: a voxel-based multi-resolution feature extractor, a spatial merge attention mechanism, and a multi-task head. It is trained with both virtually generated point clouds and as-scanned ones created from multiple instances of CAD models, themselves obtained with randomly generated parameter values. Using this data generation pipeline, the proposed approach is validated on two different data sets that have been made publicly available: a robot data set for Industry 4.0 applications and a furniture data set for virtual/augmented reality. Experiments show that this reconstruction strategy achieves compelling and accurate results at very high speed, and that it is very robust on real data obtained, for instance, by laser scanner and Kinect.
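The data-generation pipeline described above (random CAD parameters, instantiated model, sampled point cloud, scanner-like noise) can be sketched at a high level. A parametric box stands in for a real CAD model here; the parameter ranges and noise level are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def sample_box_scan(rng, n_points=512):
    """Generate one synthetic training sample: draw random model parameters,
    sample points on the model surface, and add scanner-like noise.

    A box with width/height/depth parameters is an assumed stand-in for a
    parametric CAD model.
    """
    dims = rng.uniform(0.2, 2.0, size=3)            # random CAD parameters
    # sample points inside the box, then snap each one to a random face
    pts = rng.uniform(-0.5, 0.5, size=(n_points, 3)) * dims
    face = rng.integers(0, 3, n_points)             # which axis to snap
    sign = rng.choice([-0.5, 0.5], n_points)        # which of the two faces
    pts[np.arange(n_points), face] = sign * dims[face]
    pts += rng.normal(scale=0.005, size=pts.shape)  # as-scanned noise
    return pts, dims                                # cloud + ground-truth params
```

Pairing such clouds with their ground-truth parameter vectors gives supervision for both the identification and the fitting heads.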
Semantic location extraction from crowdsourced data
Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency, and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility, and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. The recorded location is also mostly related to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. Tweets from the 2011 Queensland flood and Ushahidi Crowd Map data were semantically analysed to extract location information with the support of the Queensland Gazetteer, which was converted into an ontological gazetteer, and a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
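The basic step of matching descriptive place names in a message against a gazetteer can be illustrated very simply. The tiny gazetteer, its feature types, and the substring-matching rule below are invented for illustration; the study itself uses the Queensland Gazetteer converted into an ontology, plus a global gazetteer, for far richer semantic matching.

```python
# Hypothetical miniature gazetteer: name -> feature type and parent region.
GAZETTEER = {
    "brisbane":     {"type": "city",  "within": "queensland"},
    "ipswich":      {"type": "city",  "within": "queensland"},
    "bremer river": {"type": "river", "within": "queensland"},
}

def extract_locations(text):
    """Return gazetteer entries whose names occur in a crowdsourced message.

    Naive case-insensitive substring matching; a real system would use
    the ontology to disambiguate and to reason over the containment
    hierarchy (e.g. river -> region).
    """
    t = text.lower()
    return [{"name": name, **entry}
            for name, entry in GAZETTEER.items() if name in t]
```

For a tweet such as "Flooding near the Bremer River, roads cut around Ipswich", this recovers both place names along with their types and parent region, which is the kind of event-location evidence the recorded GPS position often lacks.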
Multiple View Texture Mapping: A Rendering Approach Designed for Driving Simulation
Simulation provides a safe and controlled environment ideal for human testing [49, 142, 120]. Simulation of real environments has reached new heights in terms of photo-realism; often, a team of professional graphical artists would have to be hired to compete with modern commercial simulators. Meanwhile, machine vision methods are being developed that attempt to automatically produce geometrically consistent and photo-realistic 3D models of real scenes [189, 139, 115, 19, 140, 111, 132], often requiring only a set of images of that scene. A road engineer wishing to simulate the environment of a real road for driving experiments could potentially use these tools.

This thesis develops a driving simulator that uses machine vision methods to reconstruct a real road automatically. A computer graphics method called projective texture mapping is applied to enhance the photo-realism of the 3D models [144, 43]. This essentially creates a virtual projector in the 3D environment that automatically assigns image coordinates to a 3D model. These principles are demonstrated using custom shaders developed for an OpenGL rendering pipeline. Projective texture mapping presents a list of challenges to overcome, including reverse projection and projection onto surfaces not immediately in front of the projector [53].

A significant challenge was the removal of dynamic foreground objects. 3D reconstruction systems create 3D models based on static objects captured in images; dynamic objects are rarely reconstructed, and projective texture mapping of images that include them can result in visual artefacts. A workflow is developed to resolve this, resulting in videos and 3D reconstructions of streets with no moving vehicles in the scene.

The final simulator using 3D reconstruction and projective texture mapping is then developed. A motion model is introduced for the rendering camera to enable human interaction. The final system is presented and experimentally tested, and potential future work is discussed.
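The projector-based texture-coordinate assignment, and the reverse-projection problem mentioned above, can be sketched outside the shader. This mirrors what a projective-texturing fragment shader computes per fragment; the projector matrix below is an assumed example, not one from the thesis.

```python
import numpy as np

def projective_tex_coords(points, proj_view):
    """Project 3D points through a virtual projector to obtain (u, v)
    texture coordinates, mimicking projective texture mapping in a shader.

    Points with clip-space w <= 0 lie behind the projector (the "reverse
    projection" problem) and are masked out rather than textured.
    """
    n = len(points)
    hom = np.hstack([points, np.ones((n, 1))])      # homogeneous coords
    clip = hom @ proj_view.T                        # projector clip space
    w = clip[:, 3]
    valid = w > 0                                   # cull reverse projection
    uv = np.full((n, 2), np.nan)
    uv[valid] = clip[valid, :2] / w[valid, None]    # perspective divide
    uv[valid] = uv[valid] * 0.5 + 0.5               # NDC [-1, 1] -> [0, 1]
    return uv, valid
```

A point directly on the projector's axis maps to the texture centre (0.5, 0.5), while points behind the projector are flagged invalid, which is exactly the case the custom shaders must handle.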