170 research outputs found
Development of an Aluminum Alloy Die Casting Process That Generates Semi-Solid Slurry Inside the Sleeve
Abstract only. Tohoku University, Koichi Anzai.
Multiple Consecutive Recapture of Rigid Nanoparticles Using a Solid-State Nanopore Sensor
Solid-state nanopore sensors have been used to measure the size of nanoparticles by applying a resistive pulse sensing technique. Previously, the size distribution of a population pool could be investigated using data from single translocations; however, the accuracy of the distribution is limited by the lack of repeated data. In this study, we characterized polystyrene nanobeads using single-particle recapture techniques, which provide a better statistical estimate of the size distribution than single-sampling techniques. The pulses and translocation times of two nanobead sizes (80 nm and 125 nm in diameter) were acquired repeatedly as nanobeads were recaptured multiple times by an automated system controlled by custom-built scripts. The drift-diffusion equation was solved to find good estimates for the configuration parameters of the recapture system. The results of the experiment indicated that measurement precision and accuracy improved as nanobeads were recaptured multiple times. Reciprocity of the recapture and capacitive effects in solid-state nanopores are discussed. Our findings suggest that solid-state nanopores and an automated recapture system can also be applied to soft nanoparticles, such as liposomes, exosomes, or viruses, to analyze their mechanical properties at single-particle resolution.
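To make the drift-diffusion step concrete, here is a minimal 1-D simulation of one recapture cycle: a particle is released a short distance from the pore, the bias is reversed, and we estimate the first-passage time back to the pore. All parameter values (drift velocity, diffusion coefficient, release distance) are illustrative assumptions, not the paper's measured configuration.

```python
# A minimal 1-D drift-diffusion sketch of a nanopore recapture cycle.
# All parameter values are illustrative assumptions, not the paper's settings.
import numpy as np

def recapture_times(n_particles=10_000, v=50e-6, D=4e-12,
                    x0=2e-6, dt=1e-5, t_max=1.0, rng=None):
    """Simulate particles released at distance x0 (m) from the pore after the
    bias is reversed. Drift velocity v (m/s) pulls them back toward the pore
    at x = 0; D (m^2/s) is the diffusion coefficient. Returns the
    first-passage (recapture) time of each particle that returns within t_max."""
    rng = rng or np.random.default_rng(0)
    x = np.full(n_particles, x0)
    t = np.zeros(n_particles)
    captured = np.zeros(n_particles, dtype=bool)
    sigma = np.sqrt(2 * D * dt)          # diffusive step size per time step
    for _ in range(int(t_max / dt)):
        active = ~captured
        if not active.any():
            break
        # Euler-Maruyama step: deterministic drift toward the pore + diffusion.
        x[active] += -v * dt + sigma * rng.standard_normal(active.sum())
        t[active] += dt
        captured |= (x <= 0)
    return t[captured]

times = recapture_times()
print(f"recaptured: {times.size} / 10000, "
      f"median return time: {np.median(times) * 1e3:.2f} ms")
```

Sweeping v (set by the reversed bias) and the reversal delay in such a simulation is one way to pick configuration parameters that keep the recapture probability high.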
Improving Scene Text Recognition for Character-Level Long-Tailed Distribution
Despite recent remarkable improvements in scene text recognition (STR), the majority of studies have focused mainly on English, which contains only a small number of characters. However, STR models show a large performance degradation on languages with many characters (e.g., Chinese and Korean), especially on characters that rarely appear due to the long-tailed character distribution of such languages. To address this issue, we conducted an empirical analysis using synthetic datasets with different character-level distributions (e.g., balanced and long-tailed). While substantially increasing the number of tail-class samples without considering context helps the model recognize characters correctly in isolation, training with such a synthetic dataset interferes with the model's learning of contextual information (i.e., the relations among characters), which is also important for predicting the whole word. Based on this motivation, we propose a novel Context-Aware and Free Experts Network (CAFE-Net) using two experts: 1) a context-aware expert that learns contextual representations from a long-tailed dataset composed of common words used in everyday life, and 2) a context-free expert that focuses on correctly predicting individual characters using a dataset with a balanced number of characters. Since the two experts are trained to focus on contextual and visual representations, respectively, we propose a novel confidence ensemble method to compensate for the limitations of each expert. Through experiments, we demonstrate that CAFE-Net improves STR performance on languages containing many characters. Moreover, we show that CAFE-Net is easily applicable to various STR models. Comment: 17 pages
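To make the confidence-ensemble idea concrete, here is a minimal sketch in which each character position is decided by whichever expert is more confident. The function names and this particular combination rule (per-position comparison of softmax confidences) are assumptions for illustration, not necessarily CAFE-Net's exact formulation.

```python
# A hedged sketch of combining a context-aware and a context-free expert
# by per-character confidence; the rule shown is an illustrative assumption.
import torch

def confidence_ensemble(logits_ctx, logits_free):
    """logits_ctx, logits_free: (seq_len, num_classes) per-character logits
    from the context-aware and the context-free expert, respectively."""
    probs_ctx = logits_ctx.softmax(dim=-1)
    probs_free = logits_free.softmax(dim=-1)
    conf_ctx, pred_ctx = probs_ctx.max(dim=-1)
    conf_free, pred_free = probs_free.max(dim=-1)
    # At each character position, trust whichever expert is more confident.
    return torch.where(conf_ctx >= conf_free, pred_ctx, pred_free)

# Example: 8 character positions over an 11,172-class Korean syllable set.
preds = confidence_ensemble(torch.randn(8, 11172), torch.randn(8, 11172))
```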
Towards Open-Set Test-Time Adaptation Utilizing the Wisdom of Crowds in Entropy Minimization
Test-time adaptation (TTA) methods, which generally rely on the model's predictions (e.g., entropy minimization) to adapt the source pretrained model to the unlabeled target domain, suffer from noisy signals originating from 1) incorrect or 2) open-set predictions. Such noisy signals hamper long-term stable adaptation, so training models without this error accumulation is crucial for practical TTA. To address these issues, including open-set TTA, we propose a simple yet effective sample selection method inspired by the following crucial empirical finding. While entropy minimization compels the model to increase the probability of its predicted label (i.e., its confidence value), we found that noisy samples instead show decreased confidence values. More specifically, entropy minimization attempts to raise the confidence value of each individual sample's prediction, but an individual confidence value may rise or fall under the influence of signals from numerous other predictions (i.e., the wisdom of crowds). Consequently, noisy signals that are misaligned with this wisdom of crowds, which correct signals generally share, fail to raise the confidence values of wrong samples despite the attempts to increase them. Based on these findings, we filter out the samples whose confidence values are lower in the adapted model than in the original model, as they are likely to be noisy. Our method is widely applicable to existing TTA methods and improves their long-term adaptation performance in both image classification (e.g., 49.4% reduced error rates with TENT) and semantic segmentation (e.g., 11.7% gain in mIoU with TENT). Comment: Accepted to ICCV 2023
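The selection rule described above reduces to a confidence comparison between the adapted model and the frozen original model. Below is a minimal sketch, assuming standard softmax confidences and a TENT-style entropy objective; the function and variable names are illustrative.

```python
# A hedged sketch of confidence-based sample selection for entropy
# minimization: keep a test sample only if the adapted model is at least
# as confident on it as the frozen original model.
import torch

def select_and_adapt(x, adapted_model, original_model, optimizer):
    logits_adapted = adapted_model(x)
    with torch.no_grad():
        logits_original = original_model(x)
    conf_adapted = logits_adapted.softmax(dim=-1).max(dim=-1).values
    conf_original = logits_original.softmax(dim=-1).max(dim=-1).values
    # Samples whose confidence dropped relative to the original model are
    # treated as noisy (incorrect or open-set) and excluded from adaptation.
    keep = conf_adapted >= conf_original
    if keep.any():
        probs = logits_adapted[keep].softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()
        optimizer.step()
    return keep
```

Because the filter only gates which samples feed the existing objective, it can wrap any prediction-driven TTA method in the same way.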
CAFA: Class-Aware Feature Alignment for Test-Time Adaptation
Despite recent advancements in deep learning, deep neural networks continue to suffer from performance degradation when applied to new data that differ from the training data. Test-time adaptation (TTA) aims to address this challenge by adapting a model to unlabeled data at test time. TTA can be applied to pretrained networks without modifying their training procedures, enabling them to utilize a well-formed source distribution for adaptation. One possible approach is to align the representation space of test samples to the source distribution (i.e., feature alignment). However, performing feature alignment in TTA is especially challenging because access to labeled source data is restricted during adaptation. That is, the model has no opportunity to learn test data in a class-discriminative manner, which is feasible in other adaptation tasks (e.g., unsupervised domain adaptation) via supervised losses on the source data. Based on this observation, we propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which simultaneously 1) encourages the model to learn target representations in a class-discriminative manner and 2) effectively mitigates distribution shifts at test time. Our method requires none of the hyperparameters or additional losses that previous approaches depend on. We conduct extensive experiments on 6 different datasets and show that our proposed method consistently outperforms existing baselines.
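As a rough illustration of such a class-aware alignment loss, the sketch below pulls each test feature toward precomputed source statistics of its pseudo-predicted class via a squared Mahalanobis distance. The per-class means, the shared covariance, and this exact distance are assumptions for illustration, not necessarily CAFA's formulation.

```python
# A hedged sketch of a class-aware feature alignment loss; per-class source
# means and a shared source covariance are assumed to be precomputed.
import torch

def class_aware_alignment_loss(features, logits, src_means, src_cov_inv):
    """features: (B, D) test features; logits: (B, C) classifier outputs;
    src_means: (C, D) per-class source feature means;
    src_cov_inv: (D, D) inverse of the source feature covariance."""
    pseudo = logits.argmax(dim=-1)          # class-discriminative assignment
    diff = features - src_means[pseudo]     # (B, D)
    # Squared Mahalanobis distance to the assigned class mean.
    maha = torch.einsum('bd,de,be->b', diff, src_cov_inv, diff)
    return maha.mean()

# Example with random stand-ins for the precomputed source statistics.
B, D, C = 32, 256, 10
loss = class_aware_alignment_loss(torch.randn(B, D), torch.randn(B, C),
                                  torch.randn(C, D), torch.eye(D))
```

Using the pseudo-label to pick the target statistics is what makes the alignment class-discriminative rather than a single global distribution match.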
Deep Imbalanced Time-series Forecasting via Local Discrepancy Density
Time-series forecasting models often encounter abrupt changes within a given period of time, which generally occur due to unexpected or unknown events. Despite occurring rarely in the training set, abrupt changes incur losses that contribute significantly to the total loss. They therefore act as noisy training samples and prevent the model from learning generalizable patterns, namely the normal states. Based on our findings, we propose a reweighting framework that down-weights the losses incurred by abrupt changes and up-weights those incurred by normal states. For the reweighting framework, we first define a measurement termed Local Discrepancy (LD), which measures the degree of abruptness of a change within a given period of time. Since a training set is mostly composed of normal states, we then consider how frequently the temporal changes appear in the training set based on LD. Our reweighting framework is applicable to existing time-series forecasting models regardless of their architectures. Through extensive experiments on 12 time-series forecasting models over eight datasets with various input-output sequence lengths, we demonstrate that applying our reweighting framework reduces MSE by 10.1% on average and by up to 18.6% in the state-of-the-art model. Comment: Accepted at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD) 2023
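A minimal sketch of the reweighting idea follows, assuming a simple local-discrepancy definition (absolute difference between the means of adjacent windows) and an inverse mapping from discrepancy to weight; the paper's exact LD and density-based weighting may differ.

```python
# A hedged sketch of down-weighting abrupt changes in a forecasting loss;
# both the discrepancy definition and the weight mapping are assumptions.
import torch

def local_discrepancy(series, window=5):
    """series: (B, T). Returns (B, T - 2*window + 1) abruptness scores: the
    absolute difference between the means of adjacent length-`window` windows."""
    means = series.unfold(1, window, 1).mean(dim=-1)   # (B, T - window + 1)
    return (means[:, window:] - means[:, :-window]).abs()

def reweighted_mse(pred, target, window=5, alpha=1.0):
    """pred, target: (B, T). Weights the squared error of each step that has
    a full window on both sides; abrupt steps get smaller weights."""
    ld = local_discrepancy(target, window)             # (B, T - 2*window + 1)
    w = 1.0 / (1.0 + alpha * ld)                       # abrupt -> small weight
    w = w / w.mean()                                   # keep the loss scale stable
    err = (pred - target) ** 2
    core = err[:, window - 1 : target.size(1) - window]  # align with ld
    return (w * core).mean()

# Example: batch of 4 sequences of length 96.
loss = reweighted_mse(torch.randn(4, 96), torch.randn(4, 96))
```

Because the weights only rescale the pointwise loss, the scheme plugs into any forecasting architecture without modification, matching the claim above.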