9,172 research outputs found
Three dimensional asset documentation using terrestrial laser scanner technology
Asset documentation is a detailed record or inventory of the properties located within a room or a building. Recording assets is important in case of property loss inside the premises, for example through fire, earthquake, or robbery. The instrument used in this study is the Faro Laser Scanner Photon 120/20, and the object of the study is the computer room of the Photogrammetry Lab, Faculty of Geoinformation and Real Estate. The final output of this study is a 3D model of the assets inside the building. Before the 3D model can be formed, the scanned data, point clouds generated by the laser scanner, have to be registered and georeferenced in order to combine the scans. The combined scans represent the whole surveyed area as seen from every scan position. These processes use Faro Scene, the software supplied with the laser scanner. With this method, large-scale asset documentation, for example of factories and schools, would be far more efficient than the conventional approach. The next step is to model the point cloud in AutoCAD 2011. Every item in the room, such as desks, chairs, cubicles, computers, whiteboards, projectors, and cupboards, is modeled, and each item is assigned attributes so that its information can be retrieved.
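At its core, combining scans taken from several positions reduces to finding the rigid transform that aligns corresponding target points between scans. As an illustrative sketch only (not Faro Scene's actual algorithm), the classic Kabsch method recovers that transform from matched points:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: best-fit rotation R and translation t with dst ~ R @ src + t,
    given matched target points src, dst of shape (n, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Georeferencing then applies one further transform of the same kind from the combined local frame into the ground coordinate system.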
Vision-model-based Real-time Localization of Unmanned Aerial Vehicle for Autonomous Structure Inspection under GPS-denied Environment
UAVs have been widely used in visual inspections of buildings, bridges and
other structures. In both autonomous and semi-autonomous outdoor flight
missions, a strong GPS signal is vital for a UAV to determine its own position.
However, a strong GPS signal is not always available: it can degrade or be fully
lost underneath large structures or close to power lines, which can cause
serious control issues or even UAV crashes. Such limitations severely restrict
the use of UAVs as a routine inspection tool in many domains. In
this paper, a vision-model-based real-time self-positioning method is proposed
to support autonomous aerial inspection without the need for GPS support.
Unlike other localization methods that require additional onboard
sensors, the proposed method uses a single camera to continuously estimate the
in-flight poses of the UAV. Each step of the proposed method is discussed in detail,
and its performance is tested through an indoor test case.
Comment: 8 pages, 5 figures, submitted to i3ce 201
3D Reconstruction & Assessment Framework based on affordable 2D Lidar
Lidar is extensively used in industry and the mass market. Thanks to its
measurement accuracy and its insensitivity to illumination compared to cameras, it
is applied to a broad range of applications, such as geodetic engineering,
self-driving cars and virtual reality. However, multi-beam 3D Lidar is very
expensive, and its massive measurement data cannot be fully leveraged on
constrained platforms. The purpose of this paper is to explore the possibility
of using a cheap off-the-shelf 2D Lidar to perform complex 3D reconstruction;
moreover, the quality of the generated 3D map is evaluated with our proposed
metrics. The 3D map is constructed in two ways. In the first, the scan is
performed at known positions with an external rotary axis in another plane. In
the second, a 2D Lidar for mapping and another 2D Lidar for
localization are placed on a trolley, which is pushed across the ground
arbitrarily. The maps generated by the different approaches are uniformly
converted to octomaps before evaluation. The similarity and difference between
the two maps are then evaluated thoroughly with the proposed metrics. The whole
mapping system is composed of several modular components: a 3D bracket was made
to assemble the long-range Lidar, the driver and the motor, and a cover
platform was made for the IMU and a short-range but highly accurate 2D Lidar.
The software is organized in separate ROS packages.
Comment: 7 pages, 9 Postscript figures. Accepted by the 2018 IEEE International
Conference on Advanced Intelligent Mechatronics
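The first mapping mode, scanning at known angles of an external rotary axis, amounts to rotating each 2D scan plane into a common 3D frame. A simplified sketch, assuming the scan plane contains the rotation axis (the paper's exact mounting geometry is not given here):

```python
import numpy as np

def scans_to_cloud(scans, angles):
    """Fuse 2D scans taken at known rotary-axis angles into one 3D cloud.
    scans:  list of (r, theta) polar range arrays from the 2D Lidar
    angles: rotation of the external axis per scan, here taken about the
            scanner's x-axis (a simplifying assumption)."""
    cloud = []
    for (r, theta), phi in zip(scans, angles):
        # points in the 2D scan plane (z = 0 in the scanner frame)
        p = np.stack([r * np.cos(theta), r * np.sin(theta), np.zeros_like(r)], axis=1)
        Rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(phi), -np.sin(phi)],
                       [0.0, np.sin(phi),  np.cos(phi)]])
        cloud.append(p @ Rx.T)
    return np.concatenate(cloud)
```

The resulting cloud can then be voxelized, e.g. into an octomap, for the map comparison step.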
Stereo Vision: A Comparison of Synthetic Imagery vs. Real World Imagery for the Automated Aerial Refueling Problem
Missions using unmanned aerial vehicles have increased in the past decade. Currently, there is no way to refuel these aircraft in flight. Automated aerial refueling can be made possible using the stereo vision system on a tanker. Real-world experiments for the automated aerial refueling problem are expensive and time consuming. Simulations performed in a virtual world have shown promising results using computer vision, so the virtual world may serve as a substitute environment for the real world. This research compares the performance of stereo vision algorithms on synthetic and real-world imagery.
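The core measurement such a stereo system provides follows the standard pinhole relation Z = f·B/d for a rectified pair: depth is focal length times baseline over disparity. A minimal helper (the numbers in the usage check below are made-up illustration values, not figures from this study):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified stereo: depth Z = f * B / d.
    f_px:         focal length in pixels
    baseline_m:   separation of the two cameras in meters
    disparity_px: horizontal pixel offset of a matched feature"""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

Because depth error grows quadratically with range for a fixed disparity error, the long stand-off distances of refueling make disparity accuracy the critical quantity to compare across synthetic and real imagery.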
Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration
Virtual reality (VR) enables data visualization in an immersive and engaging
manner, and it can be used for creating ways to explore scientific data. Here,
we use VR for visualization of 3D histology data, creating a novel interface
for digital pathology. Our contribution includes 3D modeling of a whole organ
and embedded objects of interest, fusing the models with associated
quantitative features and full resolution serial section patches, and
implementing the virtual reality application. Our VR application is multi-scale
in nature, covering two object levels representing different ranges of detail,
namely organ level and sub-organ level. In addition, the application includes
several data layers, including the measured histology image layer and multiple
representations of quantitative features computed from the histology. In this
interactive VR application, the user can set visualization properties, select
different samples and features, and interact with various objects. In this
work, we used whole mouse prostates (organ level) with prostate cancer tumors
(sub-organ objects of interest) as example cases, and included quantitative
histological features relevant for tumor biology in the VR model. Due to
automated processing of the histology data, our application can be easily
adapted to visualize other organs and pathologies from various origins. Our
application enables a novel way of exploring high-resolution,
multidimensional data for biomedical research purposes, and can also be used in
teaching and researcher training.
UAV-Based Photogrammetric Profiling of Lessor's Quarry in South Hero, Vermont
Structure from Motion (SfM) photogrammetry has been increasingly utilized as an effective tool for research in the geosciences. This study applies SfM photogrammetry to concepts of structural geology and uses it to illustrate the three-dimensional geometry of geologic structures at Lessor's Quarry in South Hero, VT. This field site is important because it is widely used for teaching the three-dimensional visualization of geological features in geology field classes. Three-dimensional visualization is a critical skill for success as a geologist, but it is typically very difficult for students to learn. The goal of this project was to create interactive three-dimensional models of Lessor's Quarry with illustrated projections of geologic features that students can use to observe the features from different viewpoints. I utilized an Unmanned Aerial Vehicle (UAV) to obtain images of the quarry and used Agisoft Metashape photogrammetry software to produce the three-dimensional models of the quarry. I added illustrations of the projected geological structures using freely available 3D modeling software, including Autodesk SketchUp and Autodesk Maya, to explore how they relate across the walls. These 3D models will help students develop three-dimensional visualization skills that can be applied to common geological problems. They illustrate concepts such as apparent dip and show how features project into the subsurface. Understanding these concepts is necessary in order to visualize the trigonometric calculations needed to determine where geological resources can be efficiently explored or extracted. This technology is an important resource that can be applied to a wide range of studies.
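The apparent-dip concept mentioned above follows a standard trigonometric relation: on a vertical section making angle β with the strike line, the apparent dip α of a plane with true dip δ satisfies tan(α) = tan(δ)·sin(β). A small helper illustrating it:

```python
import math

def apparent_dip(true_dip_deg, angle_from_strike_deg):
    """Apparent dip (degrees) on a vertical section that makes the given
    angle with the strike line: tan(alpha) = tan(delta) * sin(beta)."""
    delta = math.radians(true_dip_deg)
    beta = math.radians(angle_from_strike_deg)
    return math.degrees(math.atan(math.tan(delta) * math.sin(beta)))
```

A section perpendicular to strike (β = 90°) shows the true dip, while a section along strike (β = 0°) shows the bed as horizontal, which is exactly the effect the quarry-wall models let students see from different viewpoints.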
Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results showed that it is possible to obtain high-accuracy indoor positioning solutions in large, realistic environments using wearable lightweight sensors without deploying any beacon. This paper describes the organization work of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future.
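The accuracy score described, a third quartile over an error that combines horizontal error with floor detection, can be sketched as below. The 15 m-per-floor penalty weight is an assumption for illustration, not a figure taken from this abstract:

```python
import numpy as np

def accuracy_score(horizontal_err_m, floor_diff, floor_penalty_m=15.0):
    """Third-quartile (75th percentile) accuracy score over evaluation points.
    horizontal_err_m: horizontal positioning error per landmark, in meters
    floor_diff:       floors between estimated and true floor per landmark
    floor_penalty_m:  assumed per-floor penalty weight (illustrative only)"""
    err = np.asarray(horizontal_err_m, dtype=float) \
        + floor_penalty_m * np.abs(np.asarray(floor_diff, dtype=float))
    return np.percentile(err, 75)
```

Using the third quartile rather than the mean rewards systems that are accurate most of the time and tolerates a tail of occasional large errors.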
Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning
Along the signal-processing chain from radar detections to vehicle control, this thesis discusses semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realized from their combination. Segmentation of the (static) environment is achieved with a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph SLAM SERALOC. Based on the semantic radar SLAM map, an exemplary autonomous parking function is implemented in a real test vehicle.
Along a recorded reference path, the function parks solely on the basis of radar perception, with previously unattained positioning accuracy.
In the first step, a dataset of 8.2 · 10^6 point-wise semantically labeled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised
training of the semantic segmentation network RadarNet achieves 28.97% mIoU over six classes.
In addition, an automated radar labeling framework, SeRaLF, is presented, which supports radar labeling multimodally using reference cameras and LiDAR.
For coherent mapping, a radar-signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end specifically adapted for radar, with radar-odometry
edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph SLAM for arbitrary static environments is realized.
Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph SLAM is evaluated by means of a purely radar-based autonomous parking function. On average over 42 autonomous parking maneuvers
(≈ 3.73 km/h) with an average maneuver length of ≈ 172.75 m, a median absolute pose error of 0.235 m and an end-pose error of 0.2443 m are achieved, outperforming comparable
radar localization results by ≈ 50%. The map accuracy of changing, re-mapped locations over a mapping distance of ≈ 165 m yields ≈ 56% map consistency at a deviation of
≈ 0.163 m. For autonomous parking, a given trajectory planner and controller approach was used.