52 research outputs found

    Deep learning based low-dose synchrotron radiation CT reconstruction

    Synchrotron radiation sources are widely used in many fields, and computed tomography (CT) is one of their most important applications. The effort required of the operator varies with the subject being scanned. If the number of projection angles can be greatly reduced while achieving similar imaging quality, the working time and workload of the experimentalists will be greatly reduced. However, reducing the number of sampling angles produces serious artifacts and blurs image details. We propose ResAttUnet, a deep learning model that reconstructs high-quality images from sparse-angle sampled data. ResAttUnet is a roughly symmetric U-shaped network that incorporates ResNet-style residual connections and attention mechanisms. In addition, mixed-precision training is adopted to reduce the model's GPU memory demand and training time.
    Comment: 10 pages, 6 figures
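    The abstract does not include code; the following is a minimal PyTorch sketch, under stated assumptions, of the two mechanisms it names: a residual block gated by channel attention (as might appear inside a symmetric U-shaped network) and a mixed-precision training step. All identifiers (`ResAttBlock`, the SE-style gate) are illustrative choices, not the authors' code, and the paper's exact attention mechanism may differ.

    ```python
    # Hypothetical sketch of a residual + attention block and a
    # mixed-precision training step; illustrative only, not the paper's code.
    import torch
    import torch.nn as nn

    class ResAttBlock(nn.Module):
        """Residual conv block gated by SE-style channel attention."""
        def __init__(self, channels: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
            )
            # Squeeze-and-excitation style attention over channels.
            self.att = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // 4, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // 4, channels, 1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            y = self.body(x)
            y = y * self.att(y)        # attention-weighted features
            return torch.relu(x + y)   # residual (ResNet-style) connection

    # Mixed-precision step (reduces GPU memory and training time).
    # Assumes a CUDA-capable GPU; inputs/targets here are dummy tensors.
    model = ResAttBlock(64).cuda()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()
    x = torch.randn(2, 64, 128, 128, device="cuda")   # sparse-angle input
    target = torch.randn_like(x)                      # full-angle target

    with torch.cuda.amp.autocast():
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
    ```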

    Accurate semantic segmentation of RGB-D images for indoor navigation

    We introduce a semantic segmentation approach to detect various objects for the mobile robot system “ROSWITHA” (RObot System WITH Autonomy). Developing a semantic segmentation method is a challenging research problem in machine learning and computer vision. Compared with other traditional state-of-the-art methods for understanding the surroundings, semantic segmentation is robust: it provides the richest information about an object, namely classification and localization at both the image level and the pixel level, and thus precisely depicts the shape and position of the object in space. In this work, we experimentally verify the effectiveness of semantic segmentation as an aid to robust indoor navigation tasks. To make the output segmentation map meaningful and to enhance model accuracy, point cloud data were extracted from the depth camera, fusing the RGB and depth streams to improve speed and accuracy compared with different machine learning algorithms. We compared our modified approach with state-of-the-art methods when trained on the publicly available NYUv2 dataset. Moreover, the model was then trained with the customized indoor dataset 1 (three classes) and dataset 2 (seven classes) to achieve robust classification of objects in the dynamic environment of the Frankfurt University of Applied Sciences laboratories. The model attains a global accuracy of 98.2% with a mean intersection over union (mIoU) of 90.9% on dataset 1. On dataset 2, the model achieves a global accuracy of 95.6% with an mIoU of 72%. Furthermore, the evaluations were performed in our indoor scenario.
    14 pages
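    As a point of reference for the reported numbers, here is a minimal NumPy sketch (assumed, not the paper's code) of how the two metrics quoted above, global (pixel) accuracy and mean intersection over union (mIoU), are typically computed from a confusion matrix; the 3-class setup and the dummy labels are illustrative only.

    ```python
    # Hypothetical sketch of global accuracy and mIoU from a confusion matrix.
    import numpy as np

    def confusion_matrix(pred, gt, num_classes):
        """Accumulate a num_classes x num_classes confusion matrix."""
        mask = (gt >= 0) & (gt < num_classes)
        idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
        return np.bincount(idx, minlength=num_classes ** 2).reshape(
            num_classes, num_classes)

    def metrics(cm):
        # Global accuracy: correctly labeled pixels over all pixels.
        global_acc = np.diag(cm).sum() / cm.sum()
        # Per-class IoU: TP / (TP + FP + FN); mIoU is the class mean.
        iou = np.diag(cm) / (cm.sum(0) + cm.sum(1) - np.diag(cm))
        return global_acc, np.nanmean(iou)

    # Example with 3 classes (as in "dataset 1"), random dummy labels.
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 3, size=(480, 640))
    pred = gt.copy()
    pred[::10] = rng.integers(0, 3, size=pred[::10].shape)  # inject errors
    acc, miou = metrics(confusion_matrix(pred, gt, 3))
    print(f"global accuracy = {acc:.3f}, mIoU = {miou:.3f}")
    ```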