
    The 3D Structure of N132D in the LMC: A Late-Stage Young Supernova Remnant

    We have used the Wide Field Spectrograph (WiFeS) on the 2.3 m telescope at Siding Spring Observatory to map the [O III] 5007 Å dynamics of the young oxygen-rich supernova remnant N132D in the Large Magellanic Cloud. From the resultant data cube, we have been able to reconstruct the full 3D structure of the system of [O III] filaments. The majority of the ejecta form a ring ~12 pc in diameter, inclined at an angle of 25 degrees to the line of sight. We conclude that SNR N132D is approaching the end of the reverse-shock phase, before entering the fully thermalized Sedov phase of evolution. We speculate that the ring of oxygen-rich material comes from ejecta in the equatorial plane of a bipolar explosion, and that the overall shape of the SNR is strongly influenced by the pre-supernova mass loss from the progenitor star. We find tantalizing evidence of a polar jet associated with a very fast oxygen-rich knot, and clear evidence that the central star has interacted with one or more dense clouds in the surrounding ISM.
    Comment: Accepted for publication in Astrophysics & Space Science, 18 pp, 8 figures
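    Reconstructions of this kind typically assume homologous (free) expansion, so that each knot's Doppler velocity maps linearly onto its line-of-sight offset from the remnant centre. A minimal sketch of that mapping is below; the observed wavelength and the remnant age are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

C_KMS = 299792.458       # speed of light in km/s
LAMBDA_REST = 5006.843   # [O III] rest wavelength in Angstroms

def los_velocity(wavelength):
    """Doppler velocity (km/s) from an observed [O III] wavelength."""
    return C_KMS * (wavelength - LAMBDA_REST) / LAMBDA_REST

def los_depth_pc(velocity_kms, age_yr):
    """Line-of-sight offset in parsecs, assuming free (homologous)
    expansion so that offset = velocity * age."""
    km_per_pc = 3.0857e13
    seconds = age_yr * 3.1557e7
    return velocity_kms * seconds / km_per_pc

# Example: a knot observed at 5015 A in a remnant assumed to be ~2500 yr old
v = los_velocity(5015.0)     # ~ +489 km/s (redshifted, i.e. on the far side)
z = los_depth_pc(v, 2500.0)  # ~ 1.25 pc behind the remnant centre
```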

    High-quality dense 3D point clouds with active stereo and a miniaturizable interferometric pattern projector

    We have built and characterized a compact, simple and flexible 3D camera based on interferometric fringe projection and stereo reconstruction. The camera uses multi-frame active stereo as the basis for 3D reconstruction, providing full-field 3D images with a measurement standard deviation of 0.09 mm, a 12.5 Hz image capture rate, and a resolution of 500 × 500 pixels. Interferometric projection enables a compact, low-power projector that consumes < 1 W of electrical power. The key component in the projector, a movable micromirror, has undergone initial vibration, thermal vacuum cycling (TVAC), and radiation testing, with no observed component degradation. The system's low power, small size, and component longevity make it well suited for space applications.
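    The depth recovery underlying active stereo is standard triangulation: the projected fringe pattern supplies dense texture so that correspondences can be matched between the two rectified views, after which depth follows from disparity as z = f·B/d. A minimal sketch with made-up calibration values follows; the focal length and baseline are assumptions, not the paper's parameters.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Pinhole stereo for rectified image pairs: depth = f * B / d.
    Zero disparity maps to infinite depth."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        return focal_px * baseline_mm / disparity_px

# Illustrative numbers (not the paper's calibration): with f = 700 px
# and a 50 mm baseline, a 35 px disparity corresponds to 1 m depth.
z_mm = depth_from_disparity(35.0, focal_px=700.0, baseline_mm=50.0)
print(z_mm)  # 1000.0 mm
```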

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems.

    In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles.

    For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal.

    The proposed methods serve as a first step towards full autonomy for agricultural vehicles. The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
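    The global fusion step described above is classical occupancy grid mapping: per-frame classified lidar returns are accumulated into a world-fixed grid with log-odds updates. A minimal 2D sketch follows; the grid dimensions and update weights are chosen for illustration and are not taken from the thesis.

```python
import numpy as np

class OccupancyGrid:
    """Minimal 2D log-odds occupancy grid for fusing per-frame
    classified lidar detections into a global obstacle map."""

    def __init__(self, size_m=100.0, cell_m=0.2):
        n = int(size_m / cell_m)
        self.cell_m = cell_m
        self.origin = size_m / 2.0            # vehicle starts at grid centre
        self.log_odds = np.zeros((n, n))
        self.l_occ, self.l_free = 0.85, -0.4  # update weights (assumed)

    def update(self, points_xy, occupied):
        """points_xy: (N, 2) world coordinates of classified returns."""
        idx = ((points_xy + self.origin) / self.cell_m).astype(int)
        delta = self.l_occ if occupied else self.l_free
        self.log_odds[idx[:, 1], idx[:, 0]] += delta

    def probability(self):
        """Cell occupancy probability recovered from log-odds."""
        return 1.0 / (1.0 + np.exp(-self.log_odds))

# Fuse one frame: obstacle returns raise cell occupancy,
# ground/vegetation returns lower it.
grid = OccupancyGrid()
grid.update(np.array([[3.2, 0.5], [3.3, 0.6]]), occupied=True)
grid.update(np.array([[1.0, 0.1], [2.0, 0.2]]), occupied=False)
```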

    Accurate dense depth from light field technology for object segmentation and 3D computer vision


    RGB-D And Thermal Sensor Fusion: A Systematic Literature Review

    In the last decade, the computer vision field has seen significant progress in multimodal data fusion and learning, where multiple sensors, including depth, infrared, and visual, are used to capture the environment across diverse spectral ranges. Despite these advancements, there has been no systematic and comprehensive evaluation of fusing RGB-D and thermal modalities to date. While autonomous driving using LiDAR, radar, RGB, and other sensors has garnered substantial research interest, along with the fusion of RGB and depth modalities, the integration of thermal cameras and, specifically, the fusion of RGB-D and thermal data has received comparatively less attention. This might be partly due to the limited number of publicly available datasets for such applications. This paper provides a comprehensive review of both state-of-the-art and traditional methods used in fusing RGB-D and thermal camera data for various applications, such as site inspection, human tracking, fault detection, and others. The reviewed literature has been categorised into technical areas, such as 3D reconstruction, segmentation, object detection, available datasets, and other related topics. Following a brief introduction and an overview of the methodology, the study delves into calibration and registration techniques, then examines thermal visualisation and 3D reconstruction, before discussing the application of classic feature-based techniques as well as modern deep learning approaches. The paper concludes with a discourse on current limitations and potential future research directions. It is hoped that this survey will serve as a valuable reference for researchers looking to familiarise themselves with the latest advancements and contribute to the RGB-DT research field.
    Comment: 33 pages, 20 figures
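    A recurring building block in the calibration and registration work such surveys cover is warping one modality into another using per-pixel depth and known extrinsics. A minimal sketch of depth-based reprojection from the depth camera into the thermal camera is below, assuming the intrinsics and depth-to-thermal extrinsics have already been obtained by calibration; it is a generic pinhole-camera construction, not a method from any one reviewed paper.

```python
import numpy as np

def register_depth_to_thermal(depth, K_d, K_t, R, t):
    """Map every depth-camera pixel to thermal-image coordinates.
    depth: (H, W) metric depth; K_d, K_t: 3x3 intrinsics;
    R, t: depth-to-thermal rotation (3x3) and translation (3,).
    Returns an (H, W, 2) array of thermal (u, v) coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K_d) @ pix      # back-project pixels to unit-depth rays
    pts = rays * depth.reshape(1, -1)    # scale rays by metric depth
    pts_t = R @ pts + t.reshape(3, 1)    # transform into the thermal frame
    proj = K_t @ pts_t                   # project into the thermal image
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)
```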

    From light rays to 3D models
