319 research outputs found
Fast Local Tone Mapping, Summed-Area Tables and Mesopic Vision Simulation
Hiroshima University, Doctor of Engineering (doctoral dissertation)
Fountain animation using a particle system
Proceedings of the 56th Joint Conference of Electrical and Information Related Societies, Chugoku Branch, FY2005 (Heisei 17), Fukuyama University, Hiroshima (October 2005)
Hepatocyte growth factor gene therapy reduces ventricular arrhythmia in animal models of myocardial ischemia.
It was recently reported that gene therapy using hepatocyte growth factor (HGF) has the potential to preserve cardiac function after myocardial ischemia. We speculated that HGF gene therapy could also prevent ventricular arrhythmia. To investigate this possibility, we examined the antiarrhythmic effect of HGF gene therapy in rat models of acute and old myocardial infarction. Myocardial ischemia was induced by ligation of the left descending coronary artery. Hemagglutinating virus of Japan (HVJ)-coated liposomes containing the HGF gene were injected directly into the myocardium fourteen days before programmed pacing. Ventricular fibrillation (VF) was induced by programmed pacing. The VF duration was reduced and the VF threshold increased after HGF gene therapy (p < 0.01). Histological analyses revealed that the number of vessels in the ischemic border zone was greatly increased after HGF gene injection. These findings indicate that HGF gene therapy has an antiarrhythmic effect after myocardial ischemia.
Which visual questions are difficult to answer? Analysis with Entropy of Answer Distributions
We propose a novel approach to identify the difficulty of visual questions
for Visual Question Answering (VQA) without direct supervision or annotations
to the difficulty. Prior works have considered the diversity of ground-truth
answers of human annotators. In contrast, we analyze the difficulty of visual
questions based on the behavior of multiple different VQA models. We propose to
cluster the entropy values of the predicted answer distributions obtained by
three different models: a baseline method that takes as input images and
questions, and two variants that take as input images only and questions only.
We use a simple k-means to cluster the visual questions of the VQA v2
validation set. Then we use state-of-the-art methods to determine the accuracy
and the entropy of the answer distributions for each cluster. A benefit of the
proposed method is that no annotation of the difficulty is required, because
the accuracy of each cluster reflects the difficulty of visual questions that
belong to it. Our approach can identify clusters of difficult visual questions
that are not answered correctly by state-of-the-art methods. Detailed analysis
on the VQA v2 dataset reveals that 1) all methods show poor performance on the
most difficult cluster (about 10% accuracy), 2) as the cluster difficulty
increases, the answers predicted by the different methods begin to differ, and
3) the values of cluster entropy are highly correlated with the cluster
accuracy. We show that our approach has the advantage of being able to assess
the difficulty of visual questions without ground truth (i.e., for the test set of
VQA v2) by assigning them to one of the clusters. We expect that this can
stimulate the development of new research directions and algorithms.
Clustering results are available online at https://github.com/tttamaki/vqd.
Comment: accepted by IEEE Access, available at
https://doi.org/10.1109/ACCESS.2020.3022063 as "An Entropy Clustering
Approach for Assessing Visual Question Difficulty".
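To make the clustering procedure described in the abstract concrete, the following is a minimal, self-contained Python sketch using synthetic data. It is not the code released at the GitHub link above; the array names, the answer-vocabulary size, the number of clusters k, and the use of SciPy's entropy and scikit-learn's KMeans are illustrative assumptions. Each question receives one entropy value per model (image+question, image-only, question-only), these three values form its feature vector, k-means groups the questions, and per-cluster accuracy then serves as a proxy for difficulty.

    # Sketch of the entropy-clustering idea (synthetic data, illustrative only).
    import numpy as np
    from scipy.stats import entropy
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_questions, n_answers = 1000, 3129   # VQA-v2-style answer vocabulary size (assumed)
    k = 5                                 # number of difficulty clusters (a free choice)

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # Stand-ins for the answer distributions predicted by the three models:
    # (1) image+question baseline, (2) image-only, (3) question-only.
    preds = {
        name: softmax(rng.normal(size=(n_questions, n_answers)))
        for name in ("image+question", "image-only", "question-only")
    }

    # One entropy value per question per model -> a 3-D feature vector per question.
    features = np.stack([entropy(p, axis=1) for p in preds.values()], axis=1)

    # Cluster the questions by their entropy profile.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

    # Per-cluster accuracy of some reference model (simulated 0/1 correctness here);
    # in the paper's setting, low-accuracy clusters correspond to difficult questions.
    correct = rng.random(n_questions) < 0.6
    for c in range(k):
        mask = labels == c
        print(f"cluster {c}: n={mask.sum():4d}  "
              f"mean entropy={features[mask].mean():.2f}  "
              f"accuracy={correct[mask].mean():.2%}")

Running the sketch prints one summary line per cluster; with real model predictions, the lower-accuracy clusters are read off as the harder visual questions.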
- …