
    Characterization and modeling of toxic fly ash constituents in the environment

    Coal fly ash is a by-product of coal combustion that has drawn renewed public scrutiny due to the negative environmental impacts of accidental releases of this waste material from storage facilities. Historically, the leaching of toxic elements from coal fly ash into the environment has been a major concern. Despite extensive efforts to characterize coal fly ash, effective models for the fate and transport of its toxic constituents have been lacking, making accurate environmental impact assessment difficult. To close this critical knowledge gap, the overall objective of this study was to develop a predictive model for the leaching of toxic elements from fly ash particles. First, physical properties of coal fly ash were characterized to evaluate their contribution to elemental transport. Unburned carbon was shown to contribute to the sorption of arsenic to fly ash, which slowed the release of arsenic. In parallel, the leaching properties of various elements were determined to differentiate species of varying leaching capacity, demonstrating that the majority of toxic elements were not mobile under environmentally relevant conditions. Subsequently, a mechanistic model for the dissolution of fly ash elements was developed and validated with batch kinetics studies. Furthermore, elemental dissolution was integrated with hydrodynamic modeling to describe the leaching of toxic elements from fly ash in dry disposal facilities, which was validated by column studies. The mechanistic model developed and validated in this research is the first to successfully characterize the complex processes underlying the release and transport of toxic elements in coal fly ash, providing a valuable tool to predict the environmental impact of coal fly ash and to develop more effective management practices for both industry and regulators.
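    The batch-kinetics dissolution step described above can be illustrated with a generic first-order leaching model. This is a textbook sketch, not the authors' actual mechanistic formulation; the function names and the rate constant are hypothetical:

```python
import math

def leached_fraction(t_hours, k_per_hour):
    """Fraction of a leachable element released after t hours, assuming
    simple first-order dissolution kinetics: f(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k_per_hour * t_hours)

def leachate_mass(total_leachable_mg_per_kg, t_hours, k_per_hour):
    """Mass released per kg of ash after t hours (mg/kg), given the
    total leachable pool of the element in the ash."""
    return total_leachable_mg_per_kg * leached_fraction(t_hours, k_per_hour)
```

    In a real assessment the rate constant would be fitted to batch-leaching data per element, and the released mass would feed a hydrodynamic transport model, as the study describes.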

    UMIFormer: Mining the Correlations between Similar Tokens for Multi-View 3D Reconstruction

    In recent years, many video tasks have achieved breakthroughs by utilizing the vision transformer and establishing spatial-temporal decoupling for feature extraction. Although multi-view 3D reconstruction also takes multiple images as input, it cannot immediately inherit this success because the associations between unstructured views are completely ambiguous: no usable prior relationship exists that is analogous to the temporal-coherence property of video. To solve this problem, we propose a novel transformer network for Unstructured Multiple Images (UMIFormer). It exploits transformer blocks for decoupled intra-view encoding and designed blocks for token rectification that mine the correlations between similar tokens from different views to achieve decoupled inter-view encoding. Afterward, all tokens acquired from the various branches are compressed into a fixed-size compact representation that preserves rich information for reconstruction by leveraging the similarities between tokens. We empirically demonstrate on ShapeNet that our decoupled learning method is adaptable to unstructured multiple images, and the experiments also verify that our model outperforms existing SOTA methods by a large margin. Code will be available at https://github.com/GaryZhu1996/UMIFormer.
    Comment: Accepted by ICCV 2023

    Long-Range Grouping Transformer for Multi-View 3D Reconstruction

    Nowadays, transformer networks have demonstrated superior performance in many computer vision tasks. In a multi-view 3D reconstruction algorithm following this paradigm, self-attention must process an intricate set of image tokens carrying massive information when facing a heavy amount of view input. This curse of information content makes model learning extremely difficult. To alleviate the problem, recent methods either compress the number of tokens representing each view or discard the attention operations between tokens from different views; both compromises clearly degrade performance. Therefore, we propose long-range grouping attention (LGA) based on the divide-and-conquer principle. Tokens from all views are grouped for separate attention operations. The tokens in each group are sampled from all views and can provide a macro representation for the view in which they reside. The richness of feature learning is guaranteed by the diversity among different groups. This yields an effective and efficient encoder that connects inter-view features using LGA and extracts intra-view features using a standard self-attention layer. Moreover, a novel progressive upsampling decoder is designed for voxel generation at relatively high resolution. Building on the above, we construct a powerful transformer-based network called LRGT. Experimental results on ShapeNet verify that our method achieves SOTA accuracy in multi-view reconstruction. Code will be available at https://github.com/LiyingCV/Long-Range-Grouping-Transformer.
    Comment: Accepted to ICCV 2023
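    The grouping idea behind LGA can be sketched as a strided round-robin assignment of view tokens to groups, so that every group mixes tokens from all views and attention within a group is long-range across views. The abstract does not specify the actual sampling scheme, so `group_tokens` and its strided rule are illustrative assumptions only:

```python
def group_tokens(tokens_per_view, num_groups):
    """tokens_per_view: list of V views, each a list of N tokens
    (N divisible by num_groups). Returns num_groups groups; group g
    collects every num_groups-th token (offset g) from every view,
    so each group contains tokens sampled from all views."""
    groups = [[] for _ in range(num_groups)]
    for view in tokens_per_view:
        for idx, token in enumerate(view):
            groups[idx % num_groups].append(token)
    return groups
```

    Each group would then be processed by an independent attention operation, keeping per-group cost low while still connecting features across views.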

    DeepInteraction: 3D Object Detection via Modality Interaction

    Existing top-performance 3D object detectors typically rely on a multi-modal fusion strategy. This design is fundamentally limited, however, because it overlooks modality-specific useful information and ultimately hampers model performance. To address this limitation, in this work we introduce a novel modality interaction strategy in which individual per-modality representations are learned and maintained throughout, enabling their unique characteristics to be exploited during object detection. To realize this strategy, we design a DeepInteraction architecture characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Experiments on the large-scale nuScenes dataset show that our proposed method surpasses all prior arts, often by a large margin. Crucially, our method ranks first on the highly competitive nuScenes object detection leaderboard.
    Comment: To appear at NeurIPS 2022. 16 pages, 7 figures

    GARNet: Global-Aware Multi-View 3D Reconstruction Network and the Cost-Performance Tradeoff

    Deep learning technology has made great progress in multi-view 3D reconstruction. Most mainstream solutions establish the mapping between the views and the shape of an object by assembling a 2D encoder and a 3D decoder as the basic structure, while adopting different approaches to aggregate features from the several views. Among them, methods using attention-based fusion perform better and more stably than the others; however, they still have an obvious shortcoming: the strong independence of each view when predicting the merging weights leads to a lack of adaptation to the global state. In this paper, we propose a global-aware attention-based fusion approach that builds the correlation between each branch and the global state to provide a comprehensive foundation for weight inference. To further enhance the network, we introduce a novel loss function to supervise the overall shape and propose a dynamic two-stage training strategy that can effectively adapt to any reconstructor with attention-based fusion. Experiments on ShapeNet verify that our method outperforms existing SOTA methods with far fewer parameters than comparable algorithms such as Pix2Vox++. Furthermore, we propose a view-reduction method based on maximizing diversity and discuss the cost-performance tradeoff of our model, achieving better performance under heavy input amounts and limited computational cost.
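    The difference between per-view and global-aware weighting can be sketched with scalar per-branch scores. The interaction with the global summary below (scaling each score by the mean of all scores before the softmax) is a toy stand-in for illustration, not the paper's actual correlation module:

```python
import math

def global_aware_fusion(branch_scores):
    """branch_scores: one scalar score per view. A vanilla attention
    fusion would weight each view from its own score alone; here each
    score is first conditioned on a global summary (the mean of all
    scores), so the weights adapt to the overall state of all views."""
    global_summary = sum(branch_scores) / len(branch_scores)
    adjusted = [s * global_summary for s in branch_scores]  # toy global interaction
    m = max(adjusted)                                       # numerically stable softmax
    exps = [math.exp(a - m) for a in adjusted]
    total = sum(exps)
    return [e / total for e in exps]
```

    In the real network the scores and the global summary would be learned feature vectors rather than scalars, and the interaction a learned module.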

    (E)-2,3-Bis(4-methoxy­phen­yl)acrylic acid

    In the title molecule, C17H16O4, the angle between the aromatic ring planes is 69.1 (6)°. The crystal structure is stabilized by intermolecular O—H⋯O hydrogen bonds; molecules related by a centre of symmetry are linked to form inversion dimers.

    The value of a novel percutaneous lung puncture clamp biopsy technique in the diagnosis of pulmonary nodules

    Background: Computed tomography-guided percutaneous lung biopsy is a crucial method for identifying pulmonary anomalies and is highly accurate in detecting evidence of malignancy, allowing practitioners to identify the stage of malignancy and thereby plan patients' treatment regimens.
    Objective: To explore the clinical application of a new computed tomography-guided percutaneous lung puncture clamp biopsy technique in the diagnosis of pulmonary nodules characterized by ground-glass opacity on chest computed tomography images.
    Methods: A unique instrument named 'combined percutaneous lung biopsy forceps', consisting of a biopsy forceps, a 15-gauge coaxial needle and a needle core, was designed. The new tool was used to obtain specimens in nine patients with pulmonary ground-glass opacity. The specimen volumes and the safety of using the instrument were measured, and the samples were assessed for sufficiency for histological testing.
    Results: Samples were obtained in all nine patients, a success rate of 100%. The volume of each specimen was consistently sufficient for a histological diagnosis. No serious complications, such as pneumothorax (primary or secondary spontaneous), occurred during the biopsy.
    Conclusions: The application of this new tool in obtaining tissue specimens from patients with pulmonary ground-glass opacity under chest computed tomography guidance was invaluable in terms of its high accuracy and safety. Moreover, it performed better than fine-needle aspiration biopsy or cutting-needle biopsy. Therefore, this instrument can be used for histological diagnosis. [Ethiop. J. Health Dev. 2021; 35(2):85-90]
    Keywords: ground-glass opacity; percutaneous lung puncture clamp biopsy; fine-needle aspiration biopsy; cutting-needle biopsy

    INT: Towards Infinite-frames 3D Detection with An Efficient Framework

    It is natural to construct a multi-frame rather than a single-frame 3D detector for a continuous-time stream. Although increasing the number of frames might improve performance, previous multi-frame studies used only a very limited number of frames to build their systems, due to dramatically increased computational and memory costs. To address these issues, we propose a novel on-stream training and prediction framework that, in theory, can employ an infinite number of frames while keeping the same amount of computation as a single-frame detector. This infinite framework (INT), which can be used with most existing detectors, is demonstrated on the popular CenterPoint, with significant latency reductions and performance improvements. We also conducted extensive experiments on two large-scale datasets, nuScenes and the Waymo Open Dataset, to demonstrate the scheme's effectiveness and efficiency. By employing INT on CenterPoint, we obtain around 7% (Waymo) and 15% (nuScenes) performance boosts with only 2~4 ms of latency overhead, and INT is currently SOTA on the Waymo 3D Detection leaderboard.
    Comment: Accepted by ECCV 2022
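    The constant-compute on-stream idea can be sketched as a recurrent memory that is updated once per incoming frame, rather than re-processing a window of K frames. The scalar exponential-moving-average state below is only a toy stand-in for INT's actual (more structured) memory, and all names are hypothetical:

```python
class StreamingDetectorState:
    """Toy on-stream accumulator: one persistent memory updated per frame,
    so per-frame compute stays constant no matter how many frames have
    been seen, in the spirit of an 'infinite-frames' framework."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.memory = None
        self.frames_seen = 0

    def update(self, frame_feature):
        """Fold one new frame's feature into the running memory."""
        if self.memory is None:
            self.memory = frame_feature
        else:
            self.memory = self.decay * self.memory + (1 - self.decay) * frame_feature
        self.frames_seen += 1
        return self.memory
```

    A detector head would then predict from `memory` at every step, so adding more history never increases per-frame cost.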

    Dynamic 1H-MRS assessment of brain tumors: A novel approach for differential diagnosis of glioma

    Purpose: To determine whether changes in the [Cho/NAA] ratio in patients with glioma, measured by dynamic 1H-MRS, can be used to differentiate between high-grade and low-grade gliomas.
    Materials and Methods: This prospective study was approved by the institutional ethics committee, and written informed consent was obtained. Forty-nine patients with biopsy-proven glioma and 20 normal control subjects were recruited. The maximum [Cho/NAA] ratios, acquired at 0 min and at 6 min, were calculated from volumes of interest (VOIs) in the tumor area and in the surrounding normal tissue for each patient. The absolute differences in [Cho/NAA] ratios between the 0- and 6-min acquisitions were compared among high-grade glioma, low-grade glioma, and control subjects.
    Results: The maximum [Cho/NAA] ratio acquired from the tumor area at 0 min was 6.08 ± 2.02, significantly different (p = 0.017) from that acquired at 6 min, 4.87 ± 2.13. The [Cho/NAA] ratio from the surrounding normal tissue did not change significantly between the two acquisitions. The absolute difference in [Cho/NAA] ratios between the 0- and 6-min time points was significantly higher in high-grade glioma (3.86 ± 3.31) than in low-grade glioma (0.81 ± 0.90, P < 0.001) and control subjects (0.061 ± 0.026, P < 0.001), while there was no significant difference between low-grade glioma and control subjects.
    Conclusions: Dynamic 1H-MRS can be useful for differential diagnosis between high-grade and low-grade gliomas and provides insight into heterogeneity within the tumor.
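    The reported measurements lend themselves to a small worked example of the study's key quantity, the absolute change in the Cho/NAA ratio between acquisitions. The threshold in `suggest_grade` is a hypothetical midpoint between the reported group means, not a clinically validated cutoff:

```python
def cho_naa_change(ratio_0min, ratio_6min):
    """Absolute change in the maximum Cho/NAA ratio between the two
    acquisitions of a dynamic 1H-MRS exam."""
    return abs(ratio_0min - ratio_6min)

def suggest_grade(delta, threshold=2.0):
    """Illustrative rule only: the study reports mean deltas of ~3.86
    (high-grade) vs ~0.81 (low-grade); the threshold is a hypothetical
    midpoint chosen for this sketch."""
    return "high-grade suspected" if delta >= threshold else "low-grade/control range"
```

    Using the tumor-area means reported above, `cho_naa_change(6.08, 4.87)` gives a delta of about 1.21 for the pooled cohort.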