
    Effect of raw materials on the performance of 3D printing geopolymer: A review

    Traditional construction materials such as cement products release a significant amount of carbon dioxide during their preparation and use, negatively impacting the environment. In contrast, 3D printing (3DP) with geopolymer materials utilises renewable, low-carbon-emission raw materials and offers energy-efficient, resource-efficient processing, contributing to a reduction in carbon emissions and an improvement in sustainability. The development of 3DP geopolymer therefore holds great significance. This paper provides a comprehensive review of 3DP geopolymer systems, examining the effect of raw materials on processability (including flowability and thixotropy) and microstructure, and also delves into sustainability and environmental impact. The evaluation highlights the crucial role of the silicon, aluminium, and calcium content of the silicate raw materials in shaping the gel structure and microstructural development of the geopolymer. Aluminium accelerates the reaction, increases the degree of reaction, and aids product formation; silicon enhances the mechanical properties of the geopolymer; and calcium facilitates the formation and stability of the three-dimensional network structure, further improving material strength and stability. Moreover, the reactivity of the raw materials is a key factor affecting interlayer bonding and interfacial mechanical properties. Finally, from a sustainability standpoint, the selection of raw materials is crucial for reducing carbon emissions, energy consumption, and costs. Compared to Portland cement, 3DP geopolymer materials demonstrate lower carbon emissions, energy consumption, and costs, making them a sustainable alternative.
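    Since the review centres on the silicon/aluminium/calcium balance of the precursor, the short Python sketch below illustrates how such a composition is typically quantified: converting oxide mass fractions (wt%) into the molar Si/Al and Ca/Si ratios used to characterise geopolymer raw materials. The precursor composition shown is a hypothetical fly-ash-like example, not data from the review; only the oxide molar masses are standard chemistry.

```python
# Illustrative only: molar Si/Al and Ca/Si ratios from an oxide composition.
# The composition values below are hypothetical, not taken from the review.

# Molar masses (g/mol) of the relevant oxides.
MOLAR_MASS = {"SiO2": 60.08, "Al2O3": 101.96, "CaO": 56.08}
# Element contributed per formula unit of each oxide (Al2O3 carries 2 Al).
ATOMS_PER_OXIDE = {"SiO2": ("Si", 1), "Al2O3": ("Al", 2), "CaO": ("Ca", 1)}

def elemental_moles(oxide_wt_pct: dict) -> dict:
    """Convert oxide mass fractions (wt%) to moles of Si, Al, Ca per 100 g."""
    moles = {}
    for oxide, wt in oxide_wt_pct.items():
        element, n = ATOMS_PER_OXIDE[oxide]
        moles[element] = moles.get(element, 0.0) + n * wt / MOLAR_MASS[oxide]
    return moles

# Hypothetical fly-ash-like precursor (wt%), for illustration only.
composition = {"SiO2": 55.0, "Al2O3": 27.0, "CaO": 4.0}
m = elemental_moles(composition)
print(f"Si/Al = {m['Si'] / m['Al']:.2f}")  # higher Si/Al: stronger gel, per the review
print(f"Ca/Si = {m['Ca'] / m['Si']:.2f}")  # Ca stabilises the 3D network structure
```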

    OAFuser: Towards Omni-Aperture Fusion for Light Field Semantic Segmentation of Road Scenes

    Light field cameras can provide rich angular and spatial information to enhance image semantic segmentation for scene understanding in the field of autonomous driving. However, the extensive angular information of light field cameras contains a large amount of redundant data, which overwhelms the limited hardware resources of intelligent vehicles, and inappropriate compression leads to information corruption and data loss. To extract representative information, we propose an Omni-Aperture Fusion model (OAFuser), which leverages dense context from the central view and discovers the angular information from sub-aperture images to generate a semantically consistent result. To avoid feature loss during network propagation while streamlining the redundant information from the light field camera, we present a simple yet effective Sub-Aperture Fusion Module (SAFM) that embeds sub-aperture images into angular features without any additional memory cost. Furthermore, to address the mismatched spatial information across viewpoints, we present a Center Angular Rectification Module (CARM) that re-sorts features and prevents feature occlusion caused by asymmetric information. Our proposed OAFuser achieves state-of-the-art performance on the UrbanLF-Real and -Syn datasets and sets a new record of 84.93% mIoU on the UrbanLF-Real Extended dataset, a gain of +4.53%. The source code of OAFuser will be made publicly available at https://github.com/FeiBryantkit/OAFuser.
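    To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch of fusing a central view with sub-aperture images. It is not the authors' SAFM implementation (see the linked repository for that); it only illustrates two assumptions behind such a design: one shared encoder for all views keeps the parameter count independent of the number of views, and accumulating a running mean over views keeps the angular feature footprint constant. All module and argument names are invented for this sketch.

```python
# Hypothetical sketch of sub-aperture fusion; NOT the OAFuser/SAFM code.
import torch
import torch.nn as nn

class SubApertureFusionSketch(nn.Module):
    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        # One shared encoder for every sub-aperture view: parameters do not
        # grow with the number of views.
        self.view_encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.center_encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * feat_ch, feat_ch, 1)

    def forward(self, center: torch.Tensor, views: torch.Tensor) -> torch.Tensor:
        # center: (B, C, H, W); views: (B, N, C, H, W) sub-aperture images.
        b, n = views.shape[:2]
        angular = None
        for i in range(n):
            f = self.view_encoder(views[:, i])  # (B, feat_ch, H, W)
            # Running sum: under no_grad inference, only one view's features
            # are live at a time, so activation memory stays roughly constant.
            angular = f if angular is None else angular + f
        angular = angular / n  # mean angular feature over all views
        spatial = self.center_encoder(center)
        return self.fuse(torch.cat([spatial, angular], dim=1))

# Toy usage: a 5x5 light field (25 views) at 64x64 resolution.
model = SubApertureFusionSketch()
fused = model(torch.randn(2, 3, 64, 64), torch.randn(2, 25, 3, 64, 64))
print(fused.shape)  # torch.Size([2, 64, 64, 64])
```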

    Computational Optics Meet Domain Adaptation: Transferring Semantic Segmentation Beyond Aberrations

    Semantic scene understanding with Minimalist Optical Systems (MOS) in mobile and wearable applications remains a challenge due to the corrupted imaging quality induced by optical aberrations. However, previous works focus only on improving subjective imaging quality through computational optics, i.e., Computational Imaging (CI) techniques, ignoring their feasibility for semantic segmentation. In this paper, we are the first to investigate Semantic Segmentation under Optical Aberrations (SSOA) of MOS. To benchmark SSOA, we construct Virtual Prototype Lens (VPL) groups through optical simulation, generating the Cityscapes-ab and KITTI-360-ab datasets under different behaviors and levels of aberration. We approach SSOA from an unsupervised domain adaptation perspective to address the scarcity of labeled aberration data in real-world scenarios. Further, we propose Computational Imaging Assisted Domain Adaptation (CIADA), which leverages prior knowledge of CI for robust performance in SSOA. Based on our benchmark, we evaluate the robustness of state-of-the-art segmenters against aberrations. Extensive evaluations of possible solutions to SSOA reveal that CIADA achieves superior performance under all aberration distributions, paving the way for applications of MOS in semantic scene understanding. Code and dataset will be made publicly available at https://github.com/zju-jiangqi/CIADA.
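    The benchmark hinges on synthesising aberrated training data from clean images. As a loose illustration of that data-generation idea, and emphatically not the paper's VPL pipeline (which simulates actual lens designs with spatially varying, wavelength-dependent aberrations), the Python sketch below degrades a clean image with a uniform defocus-style PSF plus sensor-like noise. The kernel radius and noise level are arbitrary assumptions.

```python
# Crude, hypothetical stand-in for aberration simulation; NOT the VPL pipeline.
import numpy as np
from scipy.ndimage import convolve

def disk_psf(radius: int) -> np.ndarray:
    """Uniform circular kernel approximating a defocus blur."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    psf = (x**2 + y**2 <= radius**2).astype(np.float64)
    return psf / psf.sum()

def aberrate(image: np.ndarray, radius: int = 3, noise_std: float = 0.01) -> np.ndarray:
    """Blur each channel with the PSF, then add Gaussian sensor noise."""
    psf = disk_psf(radius)
    blurred = np.stack([convolve(image[..., c], psf, mode="reflect")
                        for c in range(image.shape[-1])], axis=-1)
    noisy = blurred + np.random.normal(0.0, noise_std, blurred.shape)
    return np.clip(noisy, 0.0, 1.0)

# Toy usage on a random "clean" RGB image with values in [0, 1].
clean = np.random.rand(128, 128, 3)
degraded = aberrate(clean, radius=4)
print(degraded.shape, degraded.min() >= 0.0, degraded.max() <= 1.0)
```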