Attentional Guidance from Multiple Working Memory Representations: Evidence from Eye Movements
Recent studies have shown that the representation of an item in visual working memory (VWM) can bias
the deployment of attention to stimuli in the visual scene possessing the same features. When multiple
item representations are simultaneously held in VWM, whether these representations, especially
those held in a non-prioritized or accessory status, are able to bias attention is still controversial. In
the present study we adopted an eye tracking technique to shed light on this issue. In particular, we
implemented a manipulation aimed at prioritizing one of the VWM representations to an active status,
and tested whether attention could be guided by both the prioritized and the accessory representations
when they reappeared as distractors in a visual search task. Notably, in Experiment 1, an analysis of
first fixation proportion (FFP) revealed that both the prioritized and the accessory representations were
able to capture attention, suggesting a significant attentional guidance effect. However, this effect
was not present in manual response times (RT). Most critically, in Experiment 2, we used a more robust
experimental design controlling for different factors that might have played a role in shaping these
findings. The results showed evidence for attentional guidance from the accessory representation in
both manual RTs and FFPs. Interestingly, FFPs showed a stronger attentional bias for the prioritized
representation than for the accessory representation across experiments. The overall findings suggest
that multiple VWM representations, even the accessory representation, can simultaneously interact
with visual attention.
The Euclidean Space is Evil: Hyperbolic Attribute Editing for Few-shot Image Generation
Few-shot image generation is a challenging task since it aims to generate
diverse new images for an unseen category with only a few images. Existing
methods suffer from the trade-off between the quality and diversity of
generated images. To tackle this problem, we propose Hyperbolic Attribute
Editing (HAE), a simple yet effective method. Unlike other methods that work in
Euclidean space, HAE captures the hierarchy among images using data from seen
categories in hyperbolic space. Given a well-trained HAE, images of unseen
categories can be generated by moving the latent code of a given image along
any meaningful direction in the Poincaré disk at a fixed radius. Most
importantly, the hyperbolic space allows us to control the semantic diversity
of the generated images by setting different radii in the disk. Extensive
experiments and visualizations demonstrate that HAE is capable of not only
generating images with promising quality and diversity using limited data but
also achieving a highly controllable and interpretable editing process.
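The fixed-radius move described above can be sketched in a few lines. This is a minimal illustration of the underlying Poincaré-disk geometry, not the HAE implementation: it assumes a unit disk with curvature -1, where a point at Euclidean norm r sits at hyperbolic distance 2·artanh(r) from the origin, so a target hyperbolic radius d maps to Euclidean norm tanh(d/2). The function names are hypothetical.

```python
import math

def move_to_radius(direction, hyp_radius):
    """Return the point at hyperbolic distance `hyp_radius` from the origin
    of the Poincare disk, along the given Euclidean direction.
    tanh(d / 2) < 1 for all d, so the result always stays inside the disk."""
    norm = math.sqrt(sum(x * x for x in direction))
    unit = [x / norm for x in direction]
    euclid_norm = math.tanh(hyp_radius / 2.0)
    return [euclid_norm * x for x in unit]

def hyperbolic_distance_from_origin(p):
    """Hyperbolic distance of a disk point from the origin (curvature -1)."""
    r = math.sqrt(sum(x * x for x in p))
    return 2.0 * math.atanh(r)
```

Because the editing radius is an explicit argument, increasing it moves the edited latent code deeper into the hierarchy, which matches the paper's claim that the radius controls semantic diversity.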
Minimum Ventilation for Modern Broiler Facilities
New functions for whole-house broiler heat production as a function of bird age, based on modern straight-run broiler growth rates, are presented and compared to values in the literature. The approximations are based on field measurements of environmental conditions in modern broiler housing, using a technique that matches predicted to actual fuel use to estimate the partitioning between latent and sensible heat. Development of a program utilizing these approximations to compute ventilation and heating requirements for temperature and humidity control in broiler housing is described. The program uses steady-state heat and moisture balances commonly employed for design purposes, with hourly or daily time steps. Inputs include bird weight and numbers, house data including overall R-value and size, inside and outside temperature, and relative humidity. The program estimates ventilation for temperature and moisture control, minimum ventilation rate, and supplemental heat required. Example predictions are provided.
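The moisture-control side of the steady-state balance described above can be sketched as follows. This is a generic psychrometric moisture balance, not the program from the abstract; the function name, default air density, and example numbers are all hypothetical.

```python
def min_ventilation_for_moisture(moisture_prod_kg_h, w_inside, w_outside,
                                 air_density=1.2):
    """Minimum ventilation airflow (m^3/h) that removes `moisture_prod_kg_h`
    kilograms of water vapour per hour, given inside and outside humidity
    ratios (kg water per kg dry air) and air density (kg/m^3).
    Steady-state moisture balance: m_dot = Q * rho * (w_in - w_out),
    solved for the volumetric flow rate Q."""
    if w_inside <= w_outside:
        raise ValueError("inside humidity ratio must exceed outside")
    return moisture_prod_kg_h / (air_density * (w_inside - w_outside))
```

For example, a flock producing 50 kg/h of latent moisture with humidity ratios of 0.008 inside and 0.004 outside requires roughly 10,400 m³/h; a drier outside air stream (larger humidity-ratio difference) proportionally reduces the required airflow.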
Fast simulation of airfoil flow field via deep neural network
Computational Fluid Dynamics (CFD) has become an indispensable tool in the
design optimization and evaluation of aircraft aerodynamics. However, solving
the Navier-Stokes (NS) equations is a time-consuming, memory-demanding, and
computationally expensive task. Artificial intelligence offers a promising
avenue for flow field solving. In this work, we propose a novel deep learning
framework for rapidly reconstructing airfoil flow fields. Channel attention and
spatial attention modules are utilized in the downsampling stage of the UNet to
enhance the feature learning capabilities of the deep learning model.
Additionally, integrating the predicted flow field values generated by the deep
learning model into the NS equation solver validates the credibility of the
flow field prediction results. The NACA series airfoils were used to validate
the prediction accuracy and generalization of the deep learning model. The
experimental results show that the deep learning model achieves flow field
prediction speeds three orders of magnitude faster than the CFD solver.
Furthermore, the CFD solver integrated with the deep learning model achieves a
threefold acceleration over the standalone CFD solver. By extensively mining
historical flow field data, an efficient solution is derived for the rapid
simulation of aircraft flow fields.
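The channel-attention idea used in the UNet downsampling stage can be illustrated with a minimal sketch. This is a simplification of the squeeze-and-excitation / CBAM pattern, not the paper's model: each channel is global-average-pooled, the pooled descriptor is squashed through a sigmoid, and the channel is rescaled by the result. A real module would pass the descriptor through a small learned MLP before the sigmoid; that MLP is omitted here to keep the sketch self-contained, and the spatial-attention branch (pooling across channels instead of within them) is analogous.

```python
import math

def channel_attention(feature_maps):
    """Rescale each channel of a C x H x W feature map (nested lists) by a
    sigmoid gate computed from its global average.  Channels whose mean
    activation is high are passed through nearly unchanged; channels near
    zero are damped toward half strength."""
    gated = []
    for ch in feature_maps:                      # ch: 2-D list (H x W)
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-pooled))   # sigmoid of pooled value
        gated.append([[gate * v for v in row] for row in ch])
    return gated
```

The gate is a per-channel scalar, so the operation preserves the feature-map shape while letting the network reweight which channels the decoder attends to.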
Bias-Conflict Sample Synthesis and Adversarial Removal Debias Strategy for Temporal Sentence Grounding in Video
Temporal Sentence Grounding in Video (TSGV) suffers from a dataset bias
issue, which is caused by the uneven temporal distribution of the target
moments for samples with similar semantic components in input videos or query
texts. Existing methods resort to utilizing prior knowledge about bias to
artificially break this uneven distribution, which only removes a limited
amount of significant language biases. In this work, we propose the
bias-conflict sample synthesis and adversarial removal debias strategy
(BSSARD), which dynamically generates bias-conflict samples by explicitly
leveraging potentially spurious correlations between single-modality features
and the temporal position of the target moments. Through adversarial training,
its bias generators continuously introduce biases and generate bias-conflict
samples to deceive its grounding model. Meanwhile, the grounding model
continuously eliminates the introduced biases, which requires it to model
multi-modality alignment information. BSSARD covers most kinds of coupling
relationships and disrupt language and visual biases simultaneously. Extensive
experiments on Charades-CD and ActivityNet-CD demonstrate the promising
debiasing capability of BSSARD. Source codes are available at
https://github.com/qzhb/BSSARD.
Comment: accepted by AAAI 202
High-efficient Bloch simulation of magnetic resonance imaging sequences based on deep learning
Objective: Bloch simulation constitutes an essential part of magnetic
resonance imaging (MRI) development. However, even with the graphics processing
unit (GPU) acceleration, the heavy computational load remains a major
challenge, especially in large-scale, high-accuracy simulation scenarios. This
work aims to develop a deep learning-based simulator to accelerate Bloch
simulation. Approach: The simulator model, called Simu-Net, is based on an
end-to-end convolutional neural network and is trained with synthetic data
generated by traditional Bloch simulation. It uses dynamic convolution to fuse
spatial and physical information with different dimensions and introduces
position encoding templates to achieve position-specific labeling and overcome
the receptive field limitation of the convolutional network. Main Results:
Compared with mainstream GPU-based MRI simulation software, Simu-Net
successfully accelerates simulations by hundreds of times in both traditional
and advanced MRI pulse sequences. The accuracy and robustness of the proposed
framework were verified qualitatively and quantitatively. In addition, the trained
Simu-Net was applied to generate sufficient customized training samples for
deep learning-based T2 mapping, and results comparable to those of conventional
methods were obtained in the human brain. Significance: As a proof-of-concept work,
Simu-Net shows the potential to apply deep learning for rapidly approximating
the forward physical process of MRI and may increase the efficiency of Bloch
simulation for optimization of MRI pulse sequences and deep learning-based
methods.
Comment: 18 pages, 8 figures
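The forward physics that a learned simulator like Simu-Net approximates can be illustrated with one hard-pulse Bloch step. This is a textbook Bloch rotation-plus-relaxation update, not Simu-Net itself; the function name and units are hypothetical.

```python
import math

def bloch_step(m, flip_deg, t_ms, t1_ms, t2_ms, m0=1.0):
    """One hard-pulse Bloch step: rotate magnetization m = (mx, my, mz)
    about the x-axis by `flip_deg` degrees, then apply T1/T2 relaxation
    for `t_ms` milliseconds toward equilibrium magnetization m0."""
    mx, my, mz = m
    a = math.radians(flip_deg)
    # Rotation about x mixes the y and z components; x is unchanged.
    my, mz = my * math.cos(a) + mz * math.sin(a), -my * math.sin(a) + mz * math.cos(a)
    e1, e2 = math.exp(-t_ms / t1_ms), math.exp(-t_ms / t2_ms)
    # Transverse components decay with T2; the longitudinal component
    # recovers toward m0 with T1.
    return (mx * e2, my * e2, m0 + (mz - m0) * e1)
```

A 90° pulse on equilibrium magnetization (0, 0, 1) tips it fully into the transverse plane; with a nonzero evolution time, the transverse signal decays as exp(-t/T2) while Mz recovers toward m0, which is exactly the signal behavior a T2-mapping sequence samples.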
Resolving the model-observation discrepancy in the mesospheric and stratospheric HO_x chemistry
We examine the middle atmospheric odd-hydrogen (HO_x) chemistry by comparing the Aura Microwave Limb Sounder (MLS) OH and HO_2 measurements with a photochemical model simulation. The model underestimates mesospheric OH and HO_2 concentrations if the standard chemical kinetic rates are used, whether the model H_2O and O_3 are constrained with observations or not. To resolve the discrepancies, we adjust the kinetic rate coefficients of three key reactions (O + OH → O_2 + H, OH + HO_2 → H_2O + O_2, and H + O_2 + M → HO_2 + M) and the O_2 photo-absorption cross section at Lyman-α (121.57 nm) using Bayesian optimal estimation. A much better model-observation agreement can be achieved if the kinetic rate coefficient for H + O_2 + M → HO_2 + M is increased by 134–310% and the O_2 photo-absorption cross section at Lyman-α is reduced by 33–54%, while the kinetic rate coefficients for O + OH → O_2 + H and OH + HO_2 → H_2O + O_2 remain consistent with the current laboratory values. The kinetic rate coefficient for H + O_2 + M → HO_2 + M requires a very large adjustment beyond the uncertainty limits recommended in the NASA Data Evaluation, suggesting the need for future laboratory measurements. An alternative explanation is that the radiative association reaction, H + O_2 → HO_2 + hν, plays a significant role, which has never been measured. Our results demonstrate that high quality satellite observations can be used to constrain photochemical parameters and help improve our understanding of atmospheric chemistry.
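The Bayesian optimal estimation used to retrieve the adjusted parameters reduces, in the scalar linear case, to a variance-weighted combination of the prior and the measurement. The sketch below shows that one-dimensional update (Rodgers-style); it is a generic illustration with hypothetical values, not the study's multi-parameter retrieval.

```python
def optimal_estimate(x_prior, var_prior, y_obs, var_obs, jacobian=1.0):
    """Scalar optimal estimation for a linear forward model y = k * x.
    The gain weights the measurement against the prior by their variances;
    the posterior variance is always smaller than the prior variance."""
    k = jacobian
    gain = var_prior * k / (k * var_prior * k + var_obs)
    x_post = x_prior + gain * (y_obs - k * x_prior)
    var_post = (1.0 - gain * k) * var_prior
    return x_post, var_post
```

With equal prior and measurement variances, the posterior lands halfway between the a priori value and the observation, and the uncertainty is halved; in the study this machinery is what pulls the rate coefficients away from their laboratory priors only when the MLS data demand it.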
An Effective Method to Measure Disease Similarity Using Gene and Phenotype Associations
Motivation: In order to create controlled vocabularies for shared use across different biomedical domains, a large number of biomedical ontologies, such as the Disease Ontology (DO) and the Human Phenotype Ontology (HPO), have been created in the bioinformatics community. Quantitative measures of the associations among diseases could help researchers gain deep insight into human diseases, since similar diseases are usually caused by similar molecular origins or have similar phenotypes; this is beneficial for revealing the common attributes of diseases and improving the corresponding diagnoses and treatment plans. Some previous approaches have been proposed to measure disease similarity using a particular biomedical ontology, but for a newly discovered disease or a disease with little related genetic information in the Disease Ontology (i.e., a disease with few disease-gene associations), these approaches usually ignore the joint computation of disease similarity by integrating gene and phenotype associations. Results: In this paper we propose a novel method called GPSim to effectively deduce the semantic similarity of diseases. In particular, GPSim calculates the similarity by jointly utilizing gene, disease, and phenotype associations extracted from multiple biomedical ontologies and databases. We also explore the phenotypic factors, such as the depth of HPO terms and the number of phenotypic associations, that affect the evaluation performance. A final experimental evaluation demonstrates the performance of GPSim and shows its advantages over previous approaches.
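The motivation for joint gene-plus-phenotype similarity can be illustrated with a toy set-overlap score. This is a deliberately simple Jaccard mixture with a hypothetical weight `alpha`, not GPSim's ontology-based measure; it only shows why the phenotype term rescues diseases with sparse gene annotations.

```python
def jaccard(a, b):
    """Jaccard overlap of two association sets (0.0 when both are empty)."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def disease_similarity(genes1, genes2, phenos1, phenos2, alpha=0.5):
    """Toy disease-similarity score: a weighted mix of gene-association and
    phenotype-association overlap.  For a gene-poor disease the first term
    vanishes, but shared phenotype terms still contribute signal."""
    return alpha * jaccard(genes1, genes2) + (1 - alpha) * jaccard(phenos1, phenos2)
```

For instance, a newly discovered disease with no annotated genes but phenotypes overlapping a known disease still receives a nonzero score, whereas a gene-only measure would report zero similarity.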