175 research outputs found

    Visual duration aftereffect is position invariant

    Adaptation to relatively long or short sensory events leads to a negative aftereffect, such that the durations of subsequent events within a certain range appear contracted or expanded. This distortion in perceived duration is presumed to arise from the adaptation of duration detectors. Here, we focus on the positional sensitivity of visual duration detectors by exploring whether the duration aftereffect is constrained by the visual location of the stimuli. We adopted two different paradigms: one tests for transfer across visual hemifields, and the other tests for simultaneous selectivity between visual hemifields. With these experimental designs, we show that the duration aftereffect transfers strongly across visual hemifields and is not contingent on them. This lack of position specificity suggests that duration detectors in the visual system operate at a relatively late stage of sensory processing.

    Study on NO Heterogeneous Reduction Mechanism under Gasification Condition

    Chemisorption of NO and the subsequent heterogeneous reduction mechanisms on well-defined char models under carbon/char-CO2 gasification conditions were investigated using density functional theory at the B3LYP/6-31G(d) level of theory. The characteristics of the gasification process were incorporated into the theoretical calculations by establishing three gasification char models and accounting for the presence of CO in the ambient gas pool. The results indicate that both the configuration of the char model and the adsorption mode significantly influence the NO adsorption energy. An intensively gasified surface is likely to be thermally unfavorable, and the O-down mode is the least active approach for NO adsorption. Finally, NO heterogeneous reduction mechanisms on the three char models under gasification are proposed based on a detailed analysis of thermodynamic data and atomic bond populations.

    Probability-based Global Cross-modal Upsampling for Pansharpening

    Pansharpening is an essential preprocessing step in remote sensing image processing. Although deep learning (DL) approaches have performed well on this task, the upsampling methods used in current approaches exploit only the local information of each pixel in the low-resolution multispectral (LRMS) image, neglecting both its global information and the cross-modal information of the guiding panchromatic (PAN) image, which limits further performance improvement. To address this issue, this paper develops a novel probability-based global cross-modal upsampling (PGCU) method for pansharpening. Specifically, we first formulate the PGCU method from a probabilistic perspective and then design an efficient network module that implements it, fully utilizing the information mentioned above while also considering channel specificity. The PGCU module consists of three blocks: information extraction (IE), distribution and expectation estimation (DEE), and fine adjustment (FA). Extensive experiments verify the superiority of the PGCU method over other popular upsampling methods. Experiments also show that the PGCU module can improve the performance of existing state-of-the-art deep learning pansharpening methods. The code is available at https://github.com/Zeyu-Zhu/PGCU. Comment: 10 pages, 5 figures.
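    The core idea of the abstract — computing each high-resolution pixel as an expectation over globally shared candidate values, with probabilities guided by the PAN image — can be illustrated with a toy sketch. This is NOT the actual PGCU implementation; the candidate set (global quantiles of the LRMS band) and the similarity measure (negative absolute distance fed to a softmax) are illustrative assumptions.

```python
import numpy as np

def prob_upsample(lrms, pan, num_candidates=8):
    """Toy probability-based upsampling for one spectral band.

    Each high-resolution pixel is the expectation over a small, globally
    shared set of candidate intensity values; the probability of each
    candidate is driven by the PAN guide pixel (cross-modal information).
    """
    # Global candidate values: quantiles of the whole LRMS band,
    # so every HR pixel can draw on global (not just local) statistics.
    cand = np.quantile(lrms, np.linspace(0.0, 1.0, num_candidates))

    # Per-pixel candidate probabilities via a softmax over the negative
    # distance between the PAN guide value and each candidate.
    logits = -np.abs(pan[..., None] - cand[None, None, :])
    p = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)

    # High-resolution output = expectation over the candidate values.
    return (p * cand).sum(axis=-1)

lrms = np.array([[0.0, 1.0], [2.0, 3.0]])      # 2x2 low-res band
pan = np.linspace(0.0, 3.0, 16).reshape(4, 4)  # 4x4 panchromatic guide
hr = prob_upsample(lrms, pan)                  # 4x4 upsampled band
```

    The output stays within the value range of the LRMS input (it is a convex combination of the candidates), while its spatial structure follows the PAN guide.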

    New Interpretations of Normalization Methods in Deep Learning

    In recent years, a variety of normalization methods have been proposed to help train neural networks, such as batch normalization (BN), layer normalization (LN), weight normalization (WN), and group normalization (GN). However, mathematical tools to analyze all of these normalization methods have been lacking. In this paper, we first propose a lemma that defines the necessary tools. We then use these tools to analyze popular normalization methods in depth and reach the following conclusions: 1) most normalization methods can be interpreted in a unified framework, namely normalizing pre-activations or weights onto a sphere; 2) since most existing normalization methods are scaling invariant, optimization can be conducted on a sphere with the scaling symmetry removed, which helps stabilize network training; 3) training with these normalization methods makes the norm of the weights increase, which can cause adversarial vulnerability because it amplifies attacks. Finally, a series of experiments is conducted to verify these claims. Comment: Accepted by AAAI 202
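    The scaling invariance the abstract relies on can be checked numerically: batch-normalized pre-activations are (up to the epsilon term) unchanged when the weight matrix is rescaled. A minimal sketch, assuming a plain linear layer and per-feature batch normalization without learned affine parameters:

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize pre-activations per feature over the batch dimension.
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 16))  # batch of 64 inputs, 16 features
w = rng.normal(size=(16, 8))   # weights of a linear layer

out1 = batch_norm(x @ w)
out2 = batch_norm(x @ (3.7 * w))  # same layer with rescaled weights

# Rescaling w scales both the mean and the standard deviation of the
# pre-activations, so the normalized output is (nearly) identical.
print(np.allclose(out1, out2, atol=1e-4))
```

    Because the output is invariant to the scale of `w`, only the direction of the weight vector matters, which is exactly the "optimization on a sphere" view the paper takes.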

    Low-Illumination Road Image Enhancement by Fusing Retinex Theory and Histogram Equalization

    Low-illumination image enhancement can recover more information than the original image in low-light scenarios, e.g., nighttime driving. Existing deep-learning-based image enhancement algorithms struggle to balance overall illumination enhancement against local edge detail, due to limits on time and computational cost. This paper proposes a histogram equalization–multiscale Retinex combination approach (HE-MSR-COM) that aims to solve the blurred-edge problem of HE and the uncertainty in selecting parameters for illumination enhancement in MSR. Enhanced illumination information is extracted from the low-frequency component of the HE-enhanced image, and enhanced edge information is obtained from the high-frequency component of the MSR-enhanced image. By designing adaptive fusion weights for HE and MSR, the proposed method effectively combines the enhanced illumination and edge information. Experimental results show that HE-MSR-COM improves image quality by 23.95% and 10.6% on two datasets, respectively, compared with HE, contrast-limited adaptive histogram equalization (CLAHE), MSR, and gamma correction (GC).
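    The two building blocks the abstract combines — histogram equalization and multiscale Retinex — can be sketched in a few lines. This is a simplified illustration, not the paper's method: the Gaussian blur of MSR is replaced by a cheap box blur, and the adaptive fusion weights are replaced by a fixed weight `w`.

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization on a uint8 grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (cdf[img] * 255).astype(np.uint8)

def box_blur(img, k):
    """Separable box blur; a cheap stand-in for the Gaussian blur in MSR."""
    kernel = np.ones(2 * k + 1) / (2 * k + 1)
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img.astype(float))
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def msr(img, scales=(1, 2, 4)):
    """Multiscale Retinex: average log-ratios of image to blurred image."""
    logs = [np.log1p(img.astype(float)) - np.log1p(box_blur(img, s))
            for s in scales]
    r = np.mean(logs, axis=0)
    r = (r - r.min()) / (r.max() - r.min() + 1e-8)  # rescale to [0, 1]
    return (r * 255).astype(np.uint8)

def fuse(img, w=0.6):
    """Fixed-weight fusion of HE (illumination) and MSR (edges).

    The paper learns adaptive per-region weights; w here is a constant.
    """
    he = hist_equalize(img).astype(float)
    ms = msr(img).astype(float)
    return np.clip(w * he + (1 - w) * ms, 0, 255).astype(np.uint8)

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))  # synthetic gradient
enhanced = fuse(img)
```

    In the paper, the HE branch contributes the low-frequency (illumination) component and the MSR branch the high-frequency (edge) component; the toy fusion above simply blends the two full images.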

    The Cognitive and Neural Mechanism of Temporal Expectation


    Study on the Modeling Method and Influencing Parameters of Sandblasting Process for Blade Grinding

    Grinding the surface of wind turbine blades is a key production process before the blades enter service. To address the high labor cost and low precision of manual grinding-wheel grinding of wind turbine blades, a sandblasting process for blade grinding is proposed. The best sandblasting parameters for wind turbine blades are investigated by simulation. In this study, a simulation model of the sandblasting process is established, and a fluid analysis of blade grinding by sandblasting is carried out with blasting pressure, abrasive diameter, and blast distance as variables. Simulation results show that the best parameters are a blasting pressure of 0.7 MPa, a spraying distance of 500 mm, and an abrasive diameter of 0.4 mm. Finally, tensile tests are carried out on samples ground by the traditional grinding process and by the sandblasting process. The results show that a paint surface ground by the sandblasting process can bear a higher load. This study demonstrates the feasibility of the sandblasting process for grinding the composite materials of wind turbine blades.