
    3-Ethoxy-2-hydroxybenzaldehyde 2,4-dinitrophenylhydrazone N,N-dimethylformamide monosolvate

    The Schiff base of the title compound, C15H14N4O6·C3H7NO, was obtained from the condensation reaction of 3-ethoxy-2-hydroxybenzaldehyde and 2,4-dinitrophenylhydrazine. The dihedral angle between the benzene rings is 3.05 (10)° and intramolecular N—H⋯O and O—H⋯O hydrogen bonds generate S(6) and S(5) ring motifs, respectively. In the crystal, the Schiff base and dimethylformamide solvent molecules are linked by an O—H⋯O hydrogen bond.

    2,2′-{[4,6-Bis(octylamino)-1,3,5-triazin-2-yl]azanediyl}diethanol

    In the title compound, C23H46N6O2, the two hydroxy groups are located on opposite sides of the triazine ring. One of the hydroxy groups links with the triazine N atom via an intramolecular O—H⋯N hydrogen bond. Intermolecular O—H⋯N and N—H⋯O hydrogen bonding is observed in the crystal structure. π–π stacking is also observed between parallel triazine rings of adjacent molecules, the centroid–centroid distance being 3.5944 (14) Å.

    2-Hydroxy-3-methoxybenzaldehyde 2,4-dinitrophenylhydrazone pyridine monosolvate

    The Schiff base molecule of the title compound, C14H12N4O6·C5H5N, was obtained from the condensation reaction of 2-hydroxy-3-methoxybenzaldehyde and 2,4-dinitrophenylhydrazine. The C=N bond of the Schiff base has a trans arrangement and the dihedral angle between the two benzene rings is 3.49 (10)°. An intramolecular N—H⋯O hydrogen bond generates an S(6) ring. In the crystal, O—H⋯O hydrogen bonds link the Schiff base molecules.

    Integrating fMRI and SNP data for biomarker identification for schizophrenia with a sparse representation based variable selection method

    BACKGROUND: In recent years, both single-nucleotide polymorphism (SNP) arrays and functional magnetic resonance imaging (fMRI) have been widely used to study schizophrenia (SCZ), but only a few studies have integrated SNP and fMRI data for comprehensive analysis. METHODS: In this study, a novel sparse representation based variable selection (SRVS) method is proposed and tested on a simulated data set to demonstrate its multi-resolution properties. The SRVS method was then applied to an integrative analysis of two SCZ data sets, an SNP data set and an fMRI data set, comprising 92 cases and 116 controls. Biomarkers for the disease were identified and validated with a multivariate classification approach under leave-one-out (LOO) cross-validation, and the results were compared with those of a previously reported sparse representation based feature selection method. RESULTS: Biomarkers selected by the proposed SRVS method gave significantly higher classification accuracy in discriminating SCZ patients from healthy controls than those of the previously reported sparse representation method. Furthermore, using biomarkers from both data sets led to better classification accuracy than using a single type of biomarker, suggesting the advantage of integrative analysis of different data types. CONCLUSIONS: The proposed SRVS algorithm is effective in identifying significant biomarkers for a complex disease such as SCZ. Integrating different types of data (e.g. SNP and fMRI) may identify complementary biomarkers and improve the diagnostic accuracy of the disease.
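The validation step described in METHODS (multivariate classification of pre-selected biomarkers under leave-one-out cross-validation, comparing single-modality and combined feature sets) can be sketched as below. This is an illustrative reconstruction on random data: the biomarker counts, the logistic-regression classifier, and all variable names are assumptions, not the authors' SRVS code.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cases, n_controls = 92, 116            # sample sizes quoted in the abstract
n_snp, n_fmri = 50, 30                   # assumed numbers of pre-selected biomarkers

X_snp = rng.standard_normal((n_cases + n_controls, n_snp))
X_fmri = rng.standard_normal((n_cases + n_controls, n_fmri))
y = np.array([1] * n_cases + [0] * n_controls)   # 1 = SCZ case, 0 = healthy control

def loo_accuracy(X, y):
    """Train on all subjects but one, classify the left-out subject, repeat."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

# Compare single-modality biomarkers against the combined SNP + fMRI set.
print("SNP only:  ", loo_accuracy(X_snp, y))
print("fMRI only: ", loo_accuracy(X_fmri, y))
print("combined:  ", loo_accuracy(np.hstack([X_snp, X_fmri]), y))
```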

    3,6-Dibromo-9-(4-tert-butylbenzyl)-9H-carbazole

    In the title compound, C23H21Br2N, which was synthesized by the N-alkylation of 1-tert-butyl-4-(chloromethyl)benzene with 3,6-dibromo-9H-carbazole, the asymmetric unit contains two unique molecules. Each carbazole ring system is essentially planar, with mean deviations of 0.0077 and 0.0089 Å for the two molecules. The carbazole planes make dihedral angles of 78.9 (2) and 81.8 (2)° with the planes of the respective benzene rings.

    SA-YOLOv3: an efficient and accurate object detector using self-attention mechanism for autonomous driving

    Object detection is becoming increasingly important for autonomous-driving systems, yet poor accuracy or low inference speed limits the application of current object detectors to autonomous driving. In this work, a fast and accurate object detector termed SA-YOLOv3 is proposed by introducing dilated convolution and a self-attention module (SAM) into the YOLOv3 architecture. Furthermore, the loss function is reconstructed with GIoU and focal loss to further improve detection performance. With an input size of 512×512, the proposed SA-YOLOv3 improves YOLOv3 by 2.58 mAP and 2.63 mAP on the KITTI and BDD100K benchmarks, respectively, while maintaining real-time inference (more than 40 FPS). Compared with other state-of-the-art detectors, it offers a better trade-off between detection accuracy and speed, indicating its suitability for autonomous-driving applications. To the best of our knowledge, this is the first method to incorporate an attention mechanism into YOLOv3, and we expect this work to guide future autonomous-driving research.
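For reference, a minimal NumPy sketch of the two loss ingredients named in the abstract, GIoU loss and binary focal loss, is given below. The (x1, y1, x2, y2) box format, default hyperparameters, and all function names are illustrative assumptions, not the authors' SA-YOLOv3 implementation.

```python
import numpy as np

def giou_loss(pred, target):
    """Return 1 - GIoU for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle.
    ix1, iy1 = max(pred[0], target[0]), max(pred[1], target[1])
    ix2, iy2 = min(pred[2], target[2]), min(pred[3], target[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_t = (target[2] - target[0]) * (target[3] - target[1])
    union = area_p + area_t - inter
    iou = inter / union

    # Smallest box enclosing both; GIoU penalises the empty part of it.
    cx1, cy1 = min(pred[0], target[0]), min(pred[1], target[1])
    cx2, cy2 = max(pred[2], target[2]), max(pred[3], target[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (enclose - union) / enclose
    return 1.0 - giou

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for predicted probability p and label y in {0, 1}."""
    pt = p if y == 1 else 1.0 - p
    w = alpha if y == 1 else 1.0 - alpha
    return -w * (1.0 - pt) ** gamma * np.log(pt)

print(giou_loss(np.array([0, 0, 10, 10]), np.array([5, 5, 15, 15])))
print(focal_loss(0.7, 1))
```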

    DA-RDD: toward domain adaptive road damage detection across different countries

    Recent advances in road damage detection rely on a large amount of labeled data, while collecting pavement images is labor-intensive and time-consuming. Unsupervised Domain Adaptation (UDA) provides a promising way to adapt a source domain to a target domain; however, cross-domain crack detection remains an open problem. In this paper, we propose a domain adaptive road damage detection method, termed DA-RDD, which combines image-level and instance-level feature alignment for domain-invariant representation learning in an adversarial manner. Specifically, importance weighting is introduced to evaluate intermediate samples for image-level alignment between domains, and RoI-wise features are aggregated with multi-scale contextual information to recover crack details for progressive domain alignment at the instance level. Additionally, a large-scale road damage dataset named RDD2021, built on the Road Damage Dataset 2020 (RDD2020), is constructed with 100k synthetic labeled distress images. Extensive experiments on damage detection across different countries demonstrate the universality and superiority of DA-RDD, and empirical studies on RDD2021 further confirm its effectiveness. To the best of our knowledge, this is the first work to investigate domain-adaptive pavement crack detection, and we expect its contributions to facilitate the development of generalized road damage detection in the future.
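One ingredient of the approach, importance-weighted adversarial alignment of image-level features, can be sketched in PyTorch as below. The gradient-reversal formulation, the uncertainty-based weighting rule, and all module names and feature sizes are illustrative assumptions, not the published DA-RDD code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ImageLevelAligner(nn.Module):
    """Domain classifier applied to pooled image-level backbone features."""
    def __init__(self, in_dim=256):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat, domain_label, lamb=1.0):
        # feat: (N, C, H, W) image-level features; pool to one vector per image.
        pooled = feat.mean(dim=(2, 3))
        logits = self.head(GradReverse.apply(pooled, lamb)).squeeze(1)
        # Assumed importance weight: samples the discriminator is unsure about
        # (probability near 0.5) look "intermediate" between domains and are
        # emphasised; confidently classified samples are down-weighted.
        with torch.no_grad():
            p = torch.sigmoid(logits)
            weight = 1.0 - (2.0 * p - 1.0).abs()
        return F.binary_cross_entropy_with_logits(
            logits, domain_label.float(), weight=weight)

# Toy usage: 4 source images (label 0) and 4 target images (label 1).
feats = torch.randn(8, 256, 32, 32)
domains = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(ImageLevelAligner()(feats, domains))
```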

    V2VFormer: vehicle-to-vehicle cooperative perception with spatial-channel transformer

    Collaborative perception aims to build a holistic perceptive representation by leveraging complementary information from nearby connected automated vehicles (CAVs), thereby extending the probing scope. Nonetheless, how to aggregate individual observations reasonably remains an open problem. In this paper, we propose a novel vehicle-to-vehicle perception framework, dubbed V2VFormer, with Transformer-based Collaboration (CoTr). Specifically, it re-calibrates feature importance according to position correlation via a Spatial-Aware Transformer (SAT) and then performs dynamic semantic interaction with a Channel-Wise Transformer (CWT). Of note, CoTr is a lightweight, plug-and-play module that can be adapted seamlessly to off-the-shelf 3D detectors with an acceptable computational overhead. Additionally, a large-scale cooperative perception dataset, V2V-Set, is further augmented with a variety of driving conditions, providing extensive knowledge for model pretraining. Qualitative and quantitative experiments demonstrate that the proposed V2VFormer achieves state-of-the-art (SOTA) collaboration performance in both simulated and real-world scenarios, outperforming all counterparts by a substantial margin. We expect this work to propel the progress of networked autonomous-driving research in the future.
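A hedged sketch of the fusion idea (spatial re-calibration of each agent's feature map followed by channel-wise interaction across agents) is shown below. The layer shapes, the simple sigmoid gates standing in for the SAT and CWT attention blocks, and all names are assumptions rather than the published CoTr module.

```python
import torch
import torch.nn as nn

class SpatialChannelFusion(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Spatial gate: one importance score per location, per agent.
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)
        # Channel interaction across the concatenated agents.
        self.channel = nn.Sequential(
            nn.Linear(2 * channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid())
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, ego, cav):
        # ego, cav: (N, C, H, W) BEV features from the ego vehicle and one CAV,
        # assumed already warped into the ego coordinate frame.
        ego = ego * torch.sigmoid(self.spatial(ego))   # spatial re-calibration
        cav = cav * torch.sigmoid(self.spatial(cav))
        fused = torch.cat([ego, cav], dim=1)            # (N, 2C, H, W)
        gate = self.channel(fused.mean(dim=(2, 3)))     # (N, C) channel weights
        return self.project(fused) * gate[:, :, None, None]

ego = torch.randn(1, 64, 100, 100)
cav = torch.randn(1, 64, 100, 100)
print(SpatialChannelFusion()(ego, cav).shape)   # torch.Size([1, 64, 100, 100])
```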