Wave Loss at Screw Holes During Heavy-Haul Seamless Rail Flaw Detection by a Large Flaw Detection Vehicle: A Study of the Probe-Wheel Water Wedge Effect
This paper examines the loss of the screw-hole (bolt-hole) echo, i.e. wave loss, and the water wedge effect of the probe wheel that occur when a large flaw detection vehicle inspects heavy-haul seamless rail. To study the mechanism linking this wave loss to the water wedge effect, four factors were considered: coupling-water thickness, applied pressure, tire pressure, and sensitivity settings, and the water wedge behavior of the probe wheel was tested and dynamically verified. The results show that as the applied pressure and tire pressure increase, the critical water-sliding speed of the probe wheel rises and screw-hole wave loss becomes less likely; as the coupling-water thickness increases, the critical water-sliding speed of the tire decreases and screw-hole wave loss becomes more likely; and once the critical water-sliding speed is reached, adjusting the sensitivity is no longer effective against the screw-hole wave loss. Exploring the influence of these factors on the water wedge effect during flaw detection provides a reference for practical flaw detection applications.
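The dependence of the critical water-sliding (hydroplaning) speed on tire pressure noted above is often approximated, for ordinary pneumatic tires, by the classical hydroplaning rule of thumb. The sketch below only illustrates that generic relationship; it is not the probe-wheel model used in the paper, and the coefficient and example pressures are assumptions.

```python
import math

def critical_hydroplaning_speed_kmh(tire_pressure_kpa: float) -> float:
    """Rough critical hydroplaning speed (km/h) from tire inflation pressure (kPa),
    using the classical approximation V ~ 6.36 * sqrt(p). Generic rule of thumb,
    not the probe-wheel water-wedge model from the paper."""
    return 6.36 * math.sqrt(tire_pressure_kpa)

if __name__ == "__main__":
    for p_kpa in (200.0, 250.0, 300.0):  # assumed example inflation pressures
        v = critical_hydroplaning_speed_kmh(p_kpa)
        print(f"{p_kpa:.0f} kPa -> ~{v:.0f} km/h critical speed")
```

Consistent with the paper's finding, such an estimate rises with tire pressure, so water-wedge sliding (and hence wave loss) becomes less likely at a given inspection speed.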
Concentration Phenomenon of Semiclassical States to Reaction-Diffusion System
We consider the concentration phenomenon of semiclassical states to the following two-component reaction-diffusion system,
\begin{align*}
\left\{
\begin{aligned}
\partial_t u &= \varepsilon^2 \Delta_x u - u - V(x)v + \partial_v H(u, v), \\
\partial_t v &= -\varepsilon^2 \Delta_x v + v + V(x)u - \partial_u H(u, v),
\end{aligned}
\right.
\end{align*}
where $\varepsilon > 0$ is a small parameter. Such a system, describing the reaction and diffusion of chemicals or morphogens, arises in theoretical physics, chemistry, and biology. It is shown in this paper that ground states of the system concentrate around the local minimum points of the potential $V$ as $\varepsilon \to 0^+$. Our approach is variational and is based upon a new linking-type argument as well as iteration techniques.
TESTING FOR COMPLETE PASS-THROUGH OF EXCHANGE RATE WITHOUT TRADE BARRIERS
The objective of this paper is to test whether complete pass-through of the exchange rate exists when there are almost no transaction costs and the market is competitive. In general, the literature claims that exchange-rate pass-through is incomplete due to market imperfections, i.e. the presence of transaction costs and imperfect competition. The quasi-experimental case of food imports to Hong Kong from Mainland China is considered in the analysis. The results show that the pass-through of the exchange rate of the Chinese RMB against the US dollar to Hong Kong's food import price is complete in long-run equilibrium. In addition, the short-run adjustment significantly contributes to correcting the deviation from the long-run pass-through effect. Moreover, the complete pass-through still exists after accounting for the effects of asymmetry and volatility. Therefore, this paper contributes to the literature by providing empirical evidence that complete exchange-rate pass-through can exist in the real world.
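The combination of a long-run pass-through relationship and a short-run adjustment that corrects deviations from it is the standard Engle-Granger error-correction setup. A minimal sketch of that two-step estimation, using synthetic series and plain least squares via numpy (an illustration of the method, not the paper's actual data or estimation), is:

```python
import numpy as np

def ols(y, X):
    """Ordinary least squares with an intercept; returns coefficients and residuals."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Synthetic (assumed) series: log exchange rate and log import price with full pass-through.
rng = np.random.default_rng(0)
log_fx = np.cumsum(rng.normal(0.0, 0.01, 200))                 # random-walk exchange rate
log_price = 0.2 + 1.0 * log_fx + rng.normal(0.0, 0.005, 200)   # complete pass-through

# Step 1: long-run (cointegrating) regression; a slope near 1 indicates complete pass-through.
beta_lr, resid = ols(log_price, log_fx)

# Step 2: error-correction regression on first differences with the lagged equilibrium error.
dy, dx, ec = np.diff(log_price), np.diff(log_fx), resid[:-1]
beta_sr, _ = ols(dy, np.column_stack([dx, ec]))

print("long-run pass-through:", round(beta_lr[1], 3))
print("short-run pass-through:", round(beta_sr[1], 3),
      "error-correction speed:", round(beta_sr[2], 3))
```

A significantly negative error-correction coefficient corresponds to the short-run adjustment toward the long-run pass-through relationship described in the abstract.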
Low-Confidence Samples Mining for Semi-supervised Object Detection
Reliable pseudo-labels from unlabeled data play a key role in semi-supervised
object detection (SSOD). However, the state-of-the-art SSOD methods all rely on
pseudo-labels with high confidence, which ignore valuable pseudo-labels with
lower confidence. Additionally, insufficient mining of unlabeled data results in an excessively low recall rate, thus hurting network training. In
this paper, we propose a novel Low-confidence Samples Mining (LSM) method to
utilize low-confidence pseudo-labels efficiently. Specifically, we develop an
additional pseudo information mining (PIM) branch based on low-resolution feature maps to extract reliable large-area instances, whose IoUs are higher than those of small-area ones. Owing to the complementary predictions of PIM and the main branch, we further design self-distillation (SD) so that the two compensate for each other in a mutually-learning manner. Moreover, the extensibility of these approaches enables LSM to be applied to both Faster-RCNN and Deformable-DETR. On the MS-COCO benchmark, our method achieves a 3.54% mAP improvement over state-of-the-art methods under a 5% labeling ratio.
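As described above, LSM keeps high-confidence pseudo-labels as usual while additionally mining low-confidence predictions that cover large areas, where the auxiliary PIM branch is more reliable. A minimal sketch of that selection logic in plain Python (the thresholds, minimum-area cutoff, and box format are assumptions, not the authors' implementation):

```python
def box_area(box):
    """Area of an axis-aligned box given as (x1, y1, x2, y2)."""
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def mine_low_confidence(predictions, high_thr=0.7, low_thr=0.3, min_area=96 * 96):
    """Split detections into confident pseudo-labels and mined low-confidence,
    large-area instances. `predictions` is a list of dicts:
    {"box": (x1, y1, x2, y2), "score": float, "label": int}."""
    confident, mined = [], []
    for p in predictions:
        if p["score"] >= high_thr:
            confident.append(p)            # standard high-confidence pseudo-label
        elif p["score"] >= low_thr and box_area(p["box"]) >= min_area:
            mined.append(p)                # low confidence, but large area -> keep
    return confident, mined

# Example with made-up detections.
preds = [
    {"box": (0, 0, 200, 150), "score": 0.45, "label": 1},   # large, low confidence -> mined
    {"box": (10, 10, 40, 40), "score": 0.35, "label": 2},   # small, low confidence -> dropped
    {"box": (5, 5, 80, 90), "score": 0.85, "label": 1},     # high confidence -> kept
]
confident, mined = mine_low_confidence(preds)
print(len(confident), "confident,", len(mined), "mined")
```

The mined set would then feed the self-distillation step between the PIM and main branches rather than being discarded.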
Adaptive Ranking-based Sample Selection for Weakly Supervised Class-imbalanced Text Classification
To obtain a large amount of training labels inexpensively, researchers have
recently adopted the weak supervision (WS) paradigm, which leverages labeling
rules to synthesize training labels rather than using individual annotations to
achieve competitive results for natural language processing (NLP) tasks.
However, data imbalance is often overlooked in applying the WS paradigm,
despite being a common issue in a variety of NLP tasks. To address this
challenge, we propose Adaptive Ranking-based Sample Selection (ARS2), a
model-agnostic framework to alleviate the data imbalance issue in the WS
paradigm. Specifically, it calculates a probabilistic margin score based on the
output of the current model to measure and rank the cleanliness of each data
point. Then, the ranked data are sampled based on both class-wise and
rule-aware ranking. In particular, the two sampling strategies correspond to our
motivations: (1) to train the model with balanced data batches to reduce the
data imbalance issue and (2) to exploit the expertise of each labeling rule for
collecting clean samples. Experiments on four text classification datasets with
four different imbalance ratios show that ARS2 outperformed the
state-of-the-art imbalanced learning and WS methods, leading to a 2%-57.8%
improvement in F1-score over those baselines
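The probabilistic margin score and the class-wise ranking described above can be sketched directly. The snippet below is a hedged illustration with numpy; the margin definition and the per-class budget are plausible assumptions rather than the exact ARS2 formulation, and the rule-aware ranking is omitted.

```python
import numpy as np

def probabilistic_margin(probs):
    """Gap between the top-2 class probabilities per sample, used as a cleanliness proxy."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    return top2[:, 1] - top2[:, 0]

def class_wise_sample(probs, weak_labels, per_class):
    """Rank samples by margin within each weakly assigned class and keep the
    `per_class` highest-margin (cleanest) samples per class, giving a balanced batch."""
    margin = probabilistic_margin(probs)
    keep = []
    for c in np.unique(weak_labels):
        idx = np.where(weak_labels == c)[0]
        ranked = idx[np.argsort(-margin[idx])]   # most confident samples first
        keep.extend(ranked[:per_class].tolist())
    return np.array(keep)

# Example with made-up model outputs and weak labels for 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=10)       # rows sum to 1
weak_labels = rng.integers(0, 3, size=10)
print(class_wise_sample(probs, weak_labels, per_class=2))
```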
Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment
Uncovering rationales behind predictions of graph neural networks (GNNs) has
received increasing attention over recent years. Instance-level GNN explanation
aims to discover critical input elements, like nodes or edges, that the target
GNN relies upon for making predictions. These identified sub-structures can provide interpretations of the GNN's behavior. Though various algorithms are
proposed, most of them formalize this task by searching the minimal subgraph
which can preserve original predictions. However, an inductive bias is
deep-rooted in this framework: several subgraphs can result in the same or
similar outputs as the original graphs. Consequently, they risk providing spurious explanations and failing to provide consistent ones.
Applying them to explain weakly-performed GNNs would further amplify these
issues. To address this problem, we theoretically examine the predictions of
GNNs from the causality perspective. Two typical reasons for spurious
explanations are identified: confounding effect of latent variables like
distribution shift, and causal factors distinct from the original input.
Observing that both confounding effects and diverse causal rationales are
encoded in internal representations, we propose a new explanation
framework with an auxiliary alignment loss, which is theoretically proven to be
optimizing a more faithful explanation objective intrinsically. Concretely, for this alignment loss, several different perspectives are explored: anchor-based
alignment, distributional alignment based on Gaussian mixture models,
mutual-information-based alignment, etc. A comprehensive study is conducted
both on the effectiveness of this new framework in terms of explanation
faithfulness/consistency and on the advantages of these variants.
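Among the alignment variants listed above, the anchor-based one is the easiest to sketch: the internal representation of the candidate explanatory subgraph is pulled toward an anchor representation of the original graph, on top of the usual prediction-preservation objective. The snippet below is an illustrative PyTorch sketch under these assumptions; how anchors and embeddings are obtained, and the cosine form of the loss, are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def anchor_alignment_loss(subgraph_emb: torch.Tensor, anchor_emb: torch.Tensor) -> torch.Tensor:
    """Pull each explanatory-subgraph embedding toward its anchor embedding
    (1 - cosine similarity, averaged over the batch)."""
    return 1.0 - F.cosine_similarity(subgraph_emb, anchor_emb, dim=-1).mean()

def explanation_objective(pred_loss: torch.Tensor,
                          subgraph_emb: torch.Tensor,
                          anchor_emb: torch.Tensor,
                          lam: float = 0.5) -> torch.Tensor:
    """Prediction-preservation loss plus the auxiliary alignment term."""
    return pred_loss + lam * anchor_alignment_loss(subgraph_emb, anchor_emb)

# Example usage with random embeddings (batch of 8 candidate explanations).
sub = torch.randn(8, 64)    # embeddings of explanatory subgraphs
anc = torch.randn(8, 64)    # anchor embeddings from the original graphs
print(explanation_objective(torch.tensor(0.3), sub, anc).item())
```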
A Survey on Visual Mamba
State space models (SSM) with selection mechanisms and hardware-aware architectures, namely Mamba, have recently shown significant potential in long-sequence modeling. Since the complexity of transformers’ self-attention mechanism grows quadratically with image size, imposing increasing computational demands, researchers are currently exploring how to adapt Mamba for computer vision tasks. This paper is the first comprehensive survey that aims to provide an in-depth analysis of Mamba models within the domain of computer vision. It begins by exploring the foundational concepts contributing to Mamba’s success, including the SSM framework, selection mechanisms, and hardware-aware design. Then, we review these vision Mamba models by categorizing them into foundational models and those enhanced with techniques including convolution, recurrence, and attention to improve their sophistication. Furthermore, we investigate the widespread applications of Mamba in vision tasks, including their use as a backbone at various levels of vision processing. This encompasses general visual tasks, medical visual tasks (e.g., 2D/3D segmentation, classification, and image registration), and remote sensing visual tasks. In particular, we introduce general visual tasks from two levels: high/mid-level vision (e.g., object detection, segmentation, and video classification) and low-level vision (e.g., image super-resolution, image restoration, and visual generation). We hope this endeavor will spark additional interest within the community to address current challenges and further apply Mamba models in computer vision.
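For reference, the linear state-space recurrence underlying Mamba-style models can be written, in its standard discretized form (a textbook statement, not specific to any model surveyed), as
\begin{align*}
h_t &= \bar{A}\, h_{t-1} + \bar{B}\, x_t, \\
y_t &= C\, h_t,
\end{align*}
where $\bar{A}$ and $\bar{B}$ are the discretized state and input matrices and, in Mamba, $\bar{B}$, $C$, and the step size are made input-dependent (the selection mechanism). Because the state is updated by a scan that is linear in sequence length, the cost grows linearly with the number of image tokens, in contrast to the quadratic cost of self-attention noted above.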