Transfer Learning Based Traffic Sign Recognition Using Inception-v3 Model
Traffic sign recognition is critical for advanced driver assistance systems and road infrastructure surveys. Traditional traffic sign recognition algorithms cannot recognize traffic signs efficiently due to their inherent limitations, while deep learning-based techniques require a huge amount of training data before use, which is time-consuming and labor-intensive. In this study, a transfer learning-based method built on the Inception-v3 model is introduced for traffic sign recognition and classification, which significantly reduces the amount of training data and the computational expense. In our experiment, the Belgium Traffic Sign Database is chosen and augmented by data pre-processing techniques. Subsequently, the layer-wise features extracted using different convolution and pooling operations are compared and analyzed. Finally, the transfer learning-based model is retrained several times with fine-tuned parameters at different learning rates, and excellent reliability and repeatability are observed based on statistical analysis. The results show that the transfer learning model achieves a high level of recognition performance, up to 99.18% recognition accuracy at a learning rate of 0.05 (average accuracy of 99.09%). This study would also be beneficial for recognizing other traffic infrastructure such as road lane markings and roadside protection facilities.
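As a rough illustration of the transfer-learning setup the abstract describes, the sketch below freezes an ImageNet-pretrained Inception-v3 backbone and retrains only the final classifier at the reported learning rate. It assumes a PyTorch/torchvision workflow; the class count and the dummy data batch are placeholders, not the authors' pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load Inception-v3 pretrained on ImageNet and freeze the convolutional backbone,
# so only the final fully connected classifier is retrained on the traffic-sign data.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

num_classes = 62  # assumed class count for the Belgium Traffic Sign Database
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.05)  # learning rate from the abstract
criterion = nn.CrossEntropyLoss()

# One dummy batch stands in for the augmented traffic-sign crops (299x299 as Inception expects).
images = torch.randn(8, 3, 299, 299)
labels = torch.randint(0, num_classes, (8,))

model.train()
optimizer.zero_grad()
outputs = model(images)
# In training mode Inception-v3 may return (logits, aux_logits); keep only the main logits.
logits = outputs.logits if isinstance(outputs, tuple) else outputs
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```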
DA-RDD: toward domain adaptive road damage detection across different countries
Recent advances in road damage detection rely on a large amount of labeled data, whilst collecting pavement images is labor-intensive and time-consuming. Unsupervised Domain Adaptation (UDA) provides a promising solution for adapting a source domain to a target domain; however, cross-domain crack detection is still an open problem. In this paper, we propose a domain adaptive road damage detection method, termed DA-RDD, that incorporates image-level and instance-level feature alignment for domain-invariant representation learning in an adversarial manner. Specifically, importance weighting is introduced to evaluate the intermediate samples for image-level alignment between domains, and we aggregate RoI-wise features with multi-scale contextual information to recover crack details for progressive domain alignment at the instance level. Additionally, a large-scale road damage dataset named RDD2021, built on the Road Damage Dataset 2020 (RDD2020), is constructed with 100k synthetic labeled distress images. Extensive experimental results on damage detection across different countries demonstrate the universality and superiority of DA-RDD, and empirical studies on RDD2021 further confirm its effectiveness and advancement. To the best of our knowledge, this is the first investigation of domain adaptive pavement crack detection, and we expect the contributions in this work to facilitate the development of generalized road damage detection in the future.
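A minimal sketch of image-level adversarial feature alignment of the kind described here (a gradient-reversal layer feeding a domain discriminator) is given below. The module names, channel sizes, and toy usage are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts source vs. target domain from image-level backbone features."""
    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 128, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

# Toy usage: the detector backbone learns domain-invariant features by fooling the discriminator.
disc = DomainDiscriminator()
src_feat = torch.randn(2, 256, 32, 32, requires_grad=True)  # source-domain feature map
tgt_feat = torch.randn(2, 256, 32, 32, requires_grad=True)  # target-domain feature map
logits = torch.cat([disc(src_feat), disc(tgt_feat)])
labels = torch.cat([torch.zeros(2, 1), torch.ones(2, 1)])
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()
```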
Enhanced Photocatalytic Activity for Degradation of Methyl Orange over Silica-Titania
Silica-modified titania (SMT) powders with different atomic ratios of silica to titanium (Rx) were successfully synthesized by a simple ultrasonic irradiation technique. The prepared samples were characterized by X-ray diffraction (XRD), FT-IR spectroscopy, transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), and ultraviolet-visible spectroscopy. The specific surface area was measured according to BET theory. Results indicate that the addition of silica to titania can suppress crystallite growth and the transformation of the anatase phase to the rutile phase, enlarge the specific surface area of the titania particles, and produce a blue shift of the absorption edge compared with pure titania. The photocatalytic activity of the SMT samples was evaluated by decolorizing methyl orange aqueous solutions under UV-visible light irradiation. This activity was found to be affected by silica content, calcination temperature, H2SO4, and oxidants such as KIO4, (NH4)2S2O8, and H2O2. The results reveal that the photocatalytic activity of the 0.1-SMT catalyst is the highest among all samples calcined at 550°C for 1 h, 1.56 times higher than that of Degussa P-25 titania, a commercial TiO2 produced by the German company Degussa and widely used in industry as a photocatalyst, anti-ultraviolet agent, and thermal stabilizer. The optimal calcination temperature for preparation was 550°C. The photocatalytic activity of the SMT samples is significantly enhanced by H2SO4 solution treatment and by the oxidants.
V2VFormer: vehicle-to-vehicle cooperative perception with spatial-channel transformer
Collaborative perception aims at a holistic perceptual construction by leveraging complementary information from nearby connected automated vehicles (CAVs), thereby providing a broader probing scope. Nonetheless, how to aggregate individual observations reasonably remains an open problem. In this paper, we propose a novel vehicle-to-vehicle perception framework dubbed V2VFormer with Transformer-based Collaboration (CoTr). Specifically, it re-calibrates feature importance according to position correlation via a Spatial-Aware Transformer (SAT), and then performs dynamic semantic interaction with a Channel-Wise Transformer (CWT). Of note, CoTr is a lightweight, plug-and-play module that can be adapted seamlessly to off-the-shelf 3D detectors with an acceptable computational overhead. Additionally, a large-scale cooperative perception dataset V2V-Set is further augmented with a variety of driving conditions, thereby providing extensive knowledge for model pretraining. Qualitative and quantitative experiments demonstrate that our proposed V2VFormer achieves state-of-the-art (SOTA) collaboration performance in both simulated and real-world scenarios, outperforming all counterparts by a substantial margin. We expect this work to propel the progress of networked autonomous-driving research in the future.
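In the spirit of the SAT/CWT pairing described above, the sketch below fuses per-vehicle BEV maps with attention across agents at each BEV cell followed by channel-wise gating. Tensor shapes, module design, and the toy usage are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AgentAttentionFusion(nn.Module):
    """Fuses per-vehicle BEV maps: attention across agents at each BEV cell (spatial-aware),
    followed by a channel-wise gating step (channel-wise interaction)."""
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.agent_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.channel_gate = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels), nn.Sigmoid(),
        )

    def forward(self, feats):                                      # feats: (N_agents, C, H, W), ego is index 0
        n, c, h, w = feats.shape
        tokens = feats.permute(2, 3, 0, 1).reshape(h * w, n, c)    # one token per agent per BEV cell
        fused, _ = self.agent_attn(tokens[:, :1], tokens, tokens)  # ego queries all agents
        fused = fused.reshape(h, w, c).permute(2, 0, 1)            # back to (C, H, W)
        gate = self.channel_gate(fused.mean(dim=(1, 2)))           # global channel importance
        return fused * gate.view(c, 1, 1)

ego_map = AgentAttentionFusion()(torch.randn(3, 64, 32, 32))       # 3 connected vehicles
print(ego_map.shape)  # torch.Size([64, 32, 32])
```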
SA-YOLOv3: an efficient and accurate object detector using self-attention mechanism for autonomous driving
Object detection is becoming increasingly significant for autonomous-driving systems. However, poor accuracy or low inference speed limits the application of current object detectors to autonomous driving. In this work, a fast and accurate object detector termed SA-YOLOv3 is proposed by introducing dilated convolution and a self-attention module (SAM) into the architecture of YOLOv3. Furthermore, the loss function is reconstructed with GIoU and focal loss to further optimize detection performance. With an input size of 512×512, our proposed SA-YOLOv3 improves YOLOv3 by 2.58 mAP and 2.63 mAP on the KITTI and BDD100K benchmarks, respectively, with real-time inference (more than 40 FPS). Compared with other state-of-the-art detectors, it reports a better trade-off between detection accuracy and speed, indicating its suitability for autonomous-driving applications. To the best of our knowledge, this is the first method that incorporates YOLOv3 with an attention mechanism, and we expect this work to guide autonomous-driving research in the future.
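For reference, the GIoU component of such a reconstructed loss has a standard form; below is a generic axis-aligned 2D GIoU loss in PyTorch (boxes as (x1, y1, x2, y2)), a sketch of the published definition rather than the authors' exact code.

```python
import torch

def giou_loss(pred, target, eps=1e-7):
    """Generalized IoU loss for axis-aligned boxes given as (x1, y1, x2, y2), shape (N, 4)."""
    # Intersection area.
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest enclosing box.
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    enclose = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (enclose - union) / (enclose + eps)
    return (1.0 - giou).mean()

pred = torch.tensor([[10., 10., 50., 50.]])
gt = torch.tensor([[12., 8., 48., 52.]])
print(giou_loss(pred, gt))
```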
V2VFormer: multi-modal vehicle-to-vehicle cooperative perception via global-local transformer
Multi-vehicle cooperative perception has recently emerged to facilitate the long-range and large-scale perception ability of connected automated vehicles (CAVs). Nonetheless, existing efforts largely formulate collaborative perception as a LiDAR-only 3D detection paradigm, neglecting the significance and complementarity of dense images. In this work, we construct the first multi-modal vehicle-to-vehicle cooperative perception framework, dubbed V2VFormer++, where individual camera-LiDAR representations are incorporated with dynamic channel fusion (DCF) in bird's-eye-view (BEV) space and ego-centric BEV maps from adjacent vehicles are aggregated by a global-local transformer module. Specifically, a channel-token mixer (CTM) with an MLP design is developed to capture the global response among neighboring CAVs, and position-aware fusion (PAF) further investigates the spatial correlation between each ego-networked map from a local perspective. In this manner, we can strategically determine which CAVs are desirable for collaboration and how to aggregate the foremost information from them. Quantitative and qualitative experiments are conducted on both the publicly available OPV2V and V2X-Sim 2.0 benchmarks, and our proposed V2VFormer++ reports state-of-the-art cooperative perception performance, demonstrating its effectiveness and advancement. Moreover, an ablation study and visualization analysis further suggest strong robustness against diverse disturbances from real-world scenarios.
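To illustrate the kind of global mixing an MLP-based channel-token mixer performs over CAV-indexed tokens, here is an MLP-Mixer-style sketch. The token layout (one pooled BEV descriptor per CAV) and all dimensions are assumptions rather than the paper's definition.

```python
import torch
import torch.nn as nn

class ChannelTokenMixer(nn.Module):
    """MLP-Mixer-style block: one MLP mixes across agent tokens, another across channels."""
    def __init__(self, num_agents, channels, hidden=128):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = nn.Sequential(                 # mixes information across CAVs
            nn.Linear(num_agents, hidden), nn.GELU(), nn.Linear(hidden, num_agents))
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = nn.Sequential(               # mixes information across channels
            nn.Linear(channels, hidden), nn.GELU(), nn.Linear(hidden, channels))

    def forward(self, x):                               # x: (B, num_agents, channels)
        y = self.norm1(x).transpose(1, 2)               # (B, channels, num_agents)
        x = x + self.token_mlp(y).transpose(1, 2)       # token (agent) mixing
        x = x + self.channel_mlp(self.norm2(x))         # channel mixing
        return x

# Toy usage: 3 CAVs, each contributing a 256-d pooled BEV descriptor.
mixer = ChannelTokenMixer(num_agents=3, channels=256)
print(mixer(torch.randn(2, 3, 256)).shape)              # torch.Size([2, 3, 256])
```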
3D-DFM: anchor-free multimodal 3-D object detection with dynamic fusion module for autonomous driving
Recent advances in cross-modal 3D object detection rely heavily on anchor-based methods; however, intractable anchor parameter tuning and computationally expensive post-processing severely impede their application to embedded systems such as autonomous driving. In this work, we develop an anchor-free architecture for efficient camera-light detection and ranging (LiDAR) 3D object detection. To highlight the effect of foreground information from different modalities, we propose a dynamic fusion module (DFM) to adaptively interact image features with point features via learnable filters. In addition, the 3D distance intersection-over-union (3D-DIoU) loss is explicitly formulated as a supervision signal for 3D-oriented box regression and optimization. We integrate these components into an end-to-end multimodal 3D detector termed 3D-DFM. Comprehensive experimental results on the widely used KITTI dataset demonstrate the superiority and universality of the 3D-DFM architecture, with competitive detection accuracy and real-time inference speed. To the best of our knowledge, this is the first work that incorporates an anchor-free pipeline with multimodal 3D object detection.
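One plausible reading of image-conditioned dynamic filtering of point features is sketched below: per-point filters are predicted from sampled image features and applied to the matching LiDAR features. The abstract does not specify the DFM internals, so the shapes and layer choices here are assumptions.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Generates per-point filters from sampled image features and applies them
    to the corresponding LiDAR point features (a content-dependent fusion)."""
    def __init__(self, img_ch=64, pt_ch=64):
        super().__init__()
        self.filter_gen = nn.Linear(img_ch, pt_ch)   # predicts a per-point channel filter
        self.out_proj = nn.Linear(pt_ch, pt_ch)

    def forward(self, point_feat, img_feat):         # (N, pt_ch), (N, img_ch) per-point pairs
        filt = torch.sigmoid(self.filter_gen(img_feat))  # learnable, input-dependent weights
        return self.out_proj(point_feat * filt)          # modulate point features, then project

fusion = DynamicFusion()
pts = torch.randn(1024, 64)     # point-wise LiDAR features
imgs = torch.randn(1024, 64)    # image features sampled at the projected point locations
print(fusion(pts, imgs).shape)  # torch.Size([1024, 64])
```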
CL3D: Camera-LiDAR 3D object detection with point feature enhancement and point-guided fusion
Camera-LiDAR 3D object detection has been extensively investigated due to its significance for many real-world applications. However, there are still great challenges in addressing the intrinsic data difference between the two modalities and performing accurate feature fusion. To these ends, we propose a two-stream architecture termed CL3D that integrates a point enhancement module and a point-guided fusion module with an IoU-aware head for cross-modal 3D object detection. Specifically, pseudo-LiDAR is first generated from the RGB image, and a point enhancement module (PEM) is then designed to enhance the raw LiDAR with pseudo points. Moreover, a point-guided fusion module (PFM) is developed to find image-point correspondences at different resolutions and to incorporate semantic with geometric features in a point-wise manner. We also investigate the inconsistency between localization confidence and classification score in 3D detection, and introduce an IoU-aware prediction head (IoU Head) for accurate box regression. Comprehensive experiments are conducted on the publicly available KITTI dataset, and CL3D reports outstanding detection performance compared with both single- and multi-modal 3D detectors, demonstrating its effectiveness and competitiveness.
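A minimal sketch of point-guided, point-wise fusion of the general kind described here: image semantics are sampled at the projected point locations and concatenated with the geometric point features. The function name, shapes, and single-scale sampling are illustrative assumptions, not CL3D's implementation.

```python
import torch
import torch.nn.functional as F

def point_guided_fusion(point_feat, img_feat, uv, img_size):
    """Gathers image semantics at projected point locations and fuses them point-wise.
    point_feat: (N, Cp) LiDAR point features; img_feat: (1, Ci, H, W) image feature map;
    uv: (N, 2) pixel coordinates of each point projected into the image."""
    h, w = img_size
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (w - 1) * 2 - 1,
                        uv[:, 1] / (h - 1) * 2 - 1], dim=-1).view(1, 1, -1, 2)
    sampled = F.grid_sample(img_feat, grid, align_corners=True)   # (1, Ci, 1, N)
    sampled = sampled.squeeze(0).squeeze(1).t()                   # (N, Ci)
    return torch.cat([point_feat, sampled], dim=-1)               # (N, Cp + Ci)

pts = torch.randn(2048, 64)
feat_map = torch.randn(1, 32, 48, 160)
uv = torch.stack([torch.rand(2048) * 159, torch.rand(2048) * 47], dim=-1)
print(point_guided_fusion(pts, feat_map, uv, (48, 160)).shape)    # torch.Size([2048, 96])
```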
Enantioselective Interaction of Acid α‑Naphthyl Acetate Esterase with Chiral Organophosphorus Insecticides
Many previous works have demonstrated that acetylcholinesterase (AChE) is enantioselectively inhibited by chiral organophosphorus insecticides (OPs) and that a significant difference in reactivation exists for AChE inactivated by the (1R)- versus (1S,3S)-stereoisomers of isomalathion. It has been known that α-naphthyl acetate esterase (ANAE), an enzyme that may play an essential role in the growth of plants and in their defense against environmental stress by regulating the concentration of plant hormones, can be inhibited by OPs. However, it was unknown whether the interaction of ANAE with chiral OPs is enantioselective. The present work investigated the inhibition kinetics and spontaneous reactivation of ANAE inactivated by enantiomers of malaoxon, isomalathion, and methamidophos. The order of inhibition potency is (R) > (S) for malaoxon, (1R,3R) > (1R,3S) > (1S,3R) > (1S,3S) for isomalathion, and (S) > (R) for methamidophos according to the bimolecular rate constants of inhibition (ki), which is consistent with the order observed in the enantioselective inhibition of AChE by malaoxon, isomalathion, and methamidophos. The difference in spontaneous reactivation between AChE inactivated by the (1R)- and (1S,3S)-isomers of isomalathion is conserved for ANAE. These observations indicate that ANAE and AChE have similar selective inhibition kinetics and post-inhibitory reactions in their reactions with chiral OPs.
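For context, a bimolecular inhibition rate constant ki of this kind is conventionally obtained from pseudo-first-order inhibition kinetics; the relation below is the standard treatment and is stated here as background, not necessarily the exact analysis performed in this study.

```latex
% Residual activity A_t after incubation time t with excess inhibitor at concentration [I]:
\ln\frac{A_t}{A_0} = -k_{\mathrm{obs}}\,t,
\qquad
k_{\mathrm{obs}} = k_i\,[\mathrm{I}]
\;\;\Longrightarrow\;\;
k_i = \frac{k_{\mathrm{obs}}}{[\mathrm{I}]}
\quad (\mathrm{M^{-1}\,min^{-1}})
```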