    Cross-Inferential Networks for Source-free Unsupervised Domain Adaptation

    One central challenge in source-free unsupervised domain adaptation (UDA) is the lack of an effective approach to evaluate the prediction results of the adapted network model in the target domain. To address this challenge, we propose a new method called cross-inferential networks (CIN). Our main idea is that, when the network model is adapted to predict sample labels from encoded features, these prediction results can be used to construct new training samples with derived labels for learning a new examiner network that performs a different but compatible task in the target domain. Specifically, in this work, the base network model performs image classification while the examiner network is tasked with the relative ordering of triplets of samples whose training labels are carefully constructed from the prediction results of the base network model. Two similarity measures, cross-network correlation matrix similarity and attention consistency, are then developed to provide important guidance for the UDA process. Our experimental results on benchmark datasets demonstrate that our proposed CIN approach can significantly improve the performance of source-free UDA. Comment: accepted at ICIP 2023.
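
    As an illustration of the idea, the following sketch (not the authors' code; the function names, the confidence-based ordering rule, and the cosine comparison of correlation matrices are all assumptions) shows how relative-ordering triplet labels for an examiner network could be derived from the base classifier's target-domain predictions, and how a cross-network correlation-matrix similarity could be computed between the two networks' features:

    # Hedged sketch of the two ingredients described in the abstract (Python/PyTorch).
    import torch
    import torch.nn.functional as F

    def build_triplet_labels(probs: torch.Tensor, n_triplets: int = 256):
        """Sample triplets of target samples and derive an ordering label from the
        base model's prediction confidence (here: the max softmax probability)."""
        n = probs.size(0)
        conf = probs.max(dim=1).values                 # per-sample confidence
        idx = torch.randint(0, n, (n_triplets, 3))     # random triplets (i, j, k)
        labels = conf[idx].argmax(dim=1)               # which member is most confident
        return idx, labels

    def correlation_matrix(feats: torch.Tensor) -> torch.Tensor:
        """Batch feature correlation matrix used for the cross-network similarity."""
        z = F.normalize(feats - feats.mean(dim=0, keepdim=True), dim=1)
        return z @ z.t()

    def cross_network_similarity(feat_base: torch.Tensor, feat_examiner: torch.Tensor) -> torch.Tensor:
        """Similarity between the base and examiner correlation matrices."""
        c1, c2 = correlation_matrix(feat_base), correlation_matrix(feat_examiner)
        return F.cosine_similarity(c1.flatten(), c2.flatten(), dim=0)

    # Example with random stand-in data
    probs = torch.softmax(torch.randn(64, 10), dim=1)
    triplets, labels = build_triplet_labels(probs)
    sim = cross_network_similarity(torch.randn(64, 128), torch.randn(64, 128))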

    Metric Subregularity for Subsmooth Generalized Constraint Equations in Banach Spaces

    This paper is devoted to the metric subregularity of a class of generalized constraint equations. In particular, in terms of coderivatives and normal cones, we provide necessary and sufficient conditions for subsmooth generalized constraint equations to be metrically subregular and strongly metrically subregular in general Banach spaces and Asplund spaces, respectively.
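
    As general background (a textbook definition, not specific to the constraint systems studied in the paper), a set-valued mapping F between metric spaces X and Y is metrically subregular at a point (x-bar, y-bar) of its graph when, in LaTeX notation,

    % Metric subregularity of F at (\bar{x}, \bar{y}) \in \operatorname{gph} F; \kappa is the modulus.
    \exists\, \kappa \ge 0 \ \text{and a neighborhood } U \text{ of } \bar{x} \ \text{such that}
    \quad d\bigl(x,\, F^{-1}(\bar{y})\bigr) \;\le\; \kappa\, d\bigl(\bar{y},\, F(x)\bigr)
    \qquad \text{for all } x \in U.
    % Strong metric subregularity strengthens this to
    % d(x, \bar{x}) \le \kappa\, d(\bar{y}, F(x)) for all x \in U,
    % i.e. \bar{x} is the only solution of \bar{y} \in F(x) near \bar{x}.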

    Cotton Pests and Diseases Detection Based on Image Processing

    The damaged regions are extracted from cotton leaf images in order to measure the damage ratio of the cotton leaf caused by diseases or pests. Several algorithms suitable for cotton leaf processing, such as image enhancement and image filtering, were explored in this paper. Three color models for extracting the damaged regions from cotton leaf images were implemented: the RGB, HSI, and YCbCr color models. The ratio of damage (γ) was chosen as the feature for measuring the degree of damage caused by diseases or pests. The paper also compares the results obtained with the different color models; the comparison shows good accuracy across the color models, and the YCbCr color space is considered the best for extracting the damaged regions. DOI: http://dx.doi.org/10.11591/telkomnika.v11i6.272
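
    As a hedged illustration of the damage-ratio computation (not the paper's implementation; the Cr threshold and the leaf-mask heuristic are assumptions, and the leaf is assumed to be pre-segmented on a black background), the ratio γ can be estimated as damaged pixels over total leaf pixels in the YCbCr color space:

    # Hedged sketch: damage ratio of a cotton leaf via a YCbCr (YCrCb in OpenCV) threshold.
    import cv2
    import numpy as np

    def damage_ratio(image_path: str) -> float:
        bgr = cv2.imread(image_path)                    # assumes the leaf is segmented on a black background
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
        cr = ycrcb[:, :, 1].astype(np.float32)

        leaf = bgr.sum(axis=2) > 30                     # leaf mask: any clearly non-black pixel
        damaged = (cr > 140) & leaf                     # brown/yellow lesions raise Cr; 140 is a guessed threshold

        total = leaf.sum()
        return float(damaged.sum()) / float(total) if total > 0 else 0.0

    # Example: gamma = damage_ratio("cotton_leaf.jpg")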

    Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation

    Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage, in order to address the cross-domain performance degradation problem of deep neural networks. We take inspiration from biologically plausible learning, where neuron responses are tuned through a local synapse-change procedure and activated by competitive lateral inhibition rules. Based on these feed-forward learning rules, we design a soft Hebbian learning process which provides an unsupervised and effective mechanism for online adaptation. We observe that the performance of this feed-forward Hebbian learning for fully test-time adaptation can be significantly improved by incorporating a feedback neuro-modulation layer, which fine-tunes the neuron responses based on external feedback generated by error back-propagation from the top inference layers. This leads to our proposed neuro-modulated Hebbian learning (NHL) method for fully test-time adaptation. By combining the unsupervised feed-forward soft Hebbian learning with a learned neuro-modulator that captures feedback from external responses, the source model can be effectively adapted during the testing process. Experimental results on benchmark datasets demonstrate that our proposed method can significantly improve the adaptation performance of network models and outperforms existing state-of-the-art methods. Comment: accepted at CVPR 2023.
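
    As a rough illustration (not the authors' implementation; the soft-WTA temperature, the Oja-style decay term, and the way the modulation gain enters are all assumptions), a soft Hebbian weight update for one layer, optionally scaled by a feedback neuro-modulation signal, could look like:

    # Hedged sketch of a soft Hebbian update with an optional neuro-modulation gain (Python/PyTorch).
    import torch

    def soft_hebbian_update(W, x, lr=1e-3, tau=0.1, modulation=None):
        """W: (out, in) weights; x: (batch, in) inputs; modulation: optional (out,) gains
        derived from feedback (e.g., error back-propagated from the inference layers)."""
        y = x @ W.t()                                   # pre-activations
        p = torch.softmax(y / tau, dim=1)               # soft competition across neurons
        # Oja-style Hebbian term keeps the weights bounded: dW ~ p^T x - diag(sum p*y) W
        hebb = p.t() @ x - (p * y).sum(dim=0).unsqueeze(1) * W
        if modulation is not None:                      # feedback neuro-modulation gain
            hebb = modulation.unsqueeze(1) * hebb
        return W + lr * hebb / x.size(0)

    # Example usage with random data
    W = torch.randn(32, 64) * 0.1
    x = torch.randn(128, 64)
    W = soft_hebbian_update(W, x, modulation=torch.ones(32))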

    Benchmarking Neural Decoding Backbones towards Enhanced On-edge iBCI Applications

    Traditional invasive Brain-Computer Interfaces (iBCIs) typically depend on neural decoding processes conducted on workstations within laboratory settings, which prevents their everyday usage. Implementing these decoding processes on edge devices, such as wearables, introduces considerable challenges related to computational demands, processing speed, and maintaining accuracy. This study seeks to identify an optimal neural decoding backbone that offers robust performance and swift inference suitable for edge deployment. We executed a series of neural decoding experiments involving nonhuman primates engaged in random reaching tasks, evaluating four candidate models, the Gated Recurrent Unit (GRU), the Transformer, Receptance Weighted Key Value (RWKV), and the Selective State Space model (Mamba), across several metrics: single-session decoding, multi-session decoding, new-session fine-tuning, inference speed, calibration speed, and scalability. The findings indicate that although the GRU model delivers sufficient accuracy, the RWKV and Mamba models are preferable due to their superior inference and calibration speeds. Additionally, RWKV and Mamba follow the scaling law, demonstrating improved performance with larger datasets and increased model sizes, whereas GRU shows less pronounced scalability and the Transformer model requires computational resources that scale prohibitively. This paper presents a thorough comparative analysis of the four models in various scenarios. The results are pivotal in pinpointing an optimal backbone that can handle increasing data volumes and is viable for edge implementation. This analysis provides essential insights for ongoing research and practical applications in the field.
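
    To make the benchmarked GRU backbone concrete, here is a minimal hedged sketch (not the paper's benchmark code; the channel count, hidden size, and velocity-regression head are illustrative assumptions) of a GRU decoder mapping binned spike counts to 2-D hand velocity:

    # Hedged sketch of a GRU neural-decoding backbone (Python/PyTorch).
    import torch
    import torch.nn as nn

    class GRUDecoder(nn.Module):
        def __init__(self, n_channels=96, hidden=256, out_dim=2, layers=2):
            super().__init__()
            self.gru = nn.GRU(n_channels, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, out_dim)

        def forward(self, spikes):                  # spikes: (batch, time, channels)
            h, _ = self.gru(spikes)
            return self.head(h)                     # (batch, time, 2) velocity estimates

    # Example forward pass on synthetic binned spike counts
    model = GRUDecoder()
    x = torch.poisson(torch.full((8, 100, 96), 2.0))    # 8 trials, 100 bins, 96 channels
    pred = model(x)                                      # shape: (8, 100, 2)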

    Structure, morphology and magnetic properties of flowerlike gamma-Fe2O3@NiO core/shell nanocomposites synthesized from different precursor concentrations

    Flowerlike gamma-Fe2O3@NiO core/shell nanocomposites are synthesized by a two-step method. Their structure and morphology can be controlled by tuning the precursor concentration. Microstructural analysis reveals that all the samples have a distinct core/shell structure without impurities, and that the NiO shells are built of many irregular nanosheets enclosing the surface of the gamma-Fe2O3 core. As the precursor concentration decreases (i.e., with more NiO content), the NiO grains grow significantly and the NiO shells become thicker. Magnetic experiments are performed to analyze the influence of the different microstructures on the magnetic properties of the samples, with two main results. First, at 5 K, the saturation magnetization increases with increasing NiO shell thickness, while the residual magnetization decreases slightly. Second, the hysteresis loops measured after field cooling show that the exchange bias field fluctuates between 13 Oe and 17 Oe. This is mainly because the NiO shell (i) is composed of irregular nanosheets with disordered orientations and (ii) does not form a complete coating around the gamma-Fe2O3 core.
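
    For reference, the exchange bias field quoted here is conventionally extracted from the two coercive fields of the field-cooled hysteresis loop (this convention is standard background and is not stated in the abstract):

    % H_{c1}, H_{c2}: the two fields at which M crosses zero on the shifted loop
    H_{\mathrm{EB}} = \frac{H_{c1} + H_{c2}}{2}, \qquad
    H_{C} = \frac{\lvert H_{c1} - H_{c2} \rvert}{2}.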