Testing the martingale difference hypothesis in high dimension
In this paper, we consider testing the martingale difference hypothesis for
high-dimensional time series. Our test is built on the sum of squares of the
element-wise max-norm of the proposed matrix-valued nonlinear dependence
measure at different lags. To conduct the inference, we approximate the null
distribution of our test statistic by Gaussian approximation and provide a
simulation-based approach to generate critical values. The asymptotic behavior
of the test statistic under the alternative is also studied. Our approach is
nonparametric, as the null hypothesis only assumes that the time series
concerned is a martingale difference sequence, without specifying any
parametric forms of its
conditional moments. As an advantage of Gaussian approximation, our test is
robust to the cross-series dependence of unknown magnitude. To the best of our
knowledge, this is the first valid test for the martingale difference
hypothesis that not only allows for large dimension but also captures nonlinear
serial dependence. The practical usefulness of our test is illustrated via
simulation and a real data analysis. The test is implemented in a
user-friendly R function.
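The shape of the statistic can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the matrix-valued nonlinear dependence measure is replaced by a plain lag-j sample cross-covariance, and the critical values come from a crude Gaussian-multiplier recomputation rather than the paper's Gaussian-approximation scheme.

```python
import numpy as np

def md_test_statistic(X, max_lag=5):
    """Sum over lags of the squared element-wise max-norm of a matrix-valued
    dependence measure (here, a plain lag-j cross-covariance as a stand-in)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    stat = 0.0
    for j in range(1, max_lag + 1):
        # p x p lag-j sample cross-covariance matrix (stand-in measure)
        C = Xc[j:].T @ Xc[:-j] / n
        stat += np.max(np.abs(C)) ** 2
    return stat

def simulated_critical_value(X, max_lag=5, level=0.05, n_sim=200, seed=0):
    """Toy simulation-based critical value: recompute the statistic on
    Gaussian-multiplier-perturbed data (a crude stand-in for the paper's
    simulation-based approach)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    sims = np.empty(n_sim)
    for b in range(n_sim):
        e = rng.standard_normal((n, 1))   # one multiplier per time point
        sims[b] = md_test_statistic(X * e, max_lag)
    return np.quantile(sims, 1 - level)
```

The null is rejected when the observed statistic exceeds the simulated critical value.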
Statistical inference for high-dimensional spectral density matrix
The spectral density matrix is a fundamental object of interest in time
series analysis, and it encodes both contemporaneous and dynamic linear
relationships between component processes of the multivariate system. In this
paper we develop novel inference procedures for the spectral density matrix in
the high-dimensional setting. Specifically, we introduce a new global testing
procedure to test the nullity of the cross-spectral density for a given set of
frequencies and across pairs of component indices. For the first time, both
Gaussian approximation and parametric bootstrap methodologies are employed to
conduct inference for a high-dimensional parameter formulated in the frequency
domain, and new technical tools are developed to provide asymptotic guarantees
of the size accuracy and power for global testing. We further propose a
multiple testing procedure for simultaneously testing the nullity of the
cross-spectral density at a given set of frequencies. The method is shown to
control the false discovery rate. Both numerical simulations and a real data
illustration demonstrate the usefulness of the proposed testing methods.
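Two generic ingredients of such a procedure can be sketched as follows; the raw cross-periodogram and the standard Benjamini-Hochberg step-up rule are textbook stand-ins, not the paper's refined spectral estimator or its FDR-controlling procedure.

```python
import numpy as np

def cross_periodogram(x, y):
    """Raw cross-periodogram of two equal-length series at the Fourier
    frequencies, a basic ingredient of cross-spectral density estimation."""
    n = len(x)
    fx = np.fft.rfft(x - x.mean())
    fy = np.fft.rfft(y - y.mean())
    return fx * np.conj(fy) / (2 * np.pi * n)

def benjamini_hochberg(pvals, q=0.05):
    """Standard BH step-up procedure: boolean rejection mask controlling the
    false discovery rate at level q (under the usual dependence conditions)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index passing the step-up rule
        reject[order[:k + 1]] = True
    return reject
```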
Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration
Due to differences in viewing range, resolution, and relative position, the
multi-modality sensing module composed of infrared and visible cameras needs to
be registered so as to have more accurate scene perception. In practice, manual
calibration-based registration is the most widely used process, and it is
regularly calibrated to maintain accuracy, which is time-consuming and
labor-intensive. To cope with these problems, we propose a scene-adaptive
infrared and visible image registration method. Specifically, to address the
discrepancy between multi-modality images, an invertible translation process is
developed to establish a modality-invariant domain, which comprehensively
embraces the feature intensity and distribution of both infrared and visible
modalities. We employ homography to simulate the deformation between different
planes and develop a hierarchical framework to rectify the deformation inferred
from the proposed latent representation in a coarse-to-fine manner. To that
end, the advanced perception ability, coupled with residual estimation, aids
the regression of sparse offsets, while an alternate correlation search
facilitates more accurate correspondence matching. Moreover, we propose the
first ground truth available misaligned infrared and visible image dataset,
involving three synthetic sets and one real-world set. Extensive experiments
validate the effectiveness of the proposed method against state-of-the-art
approaches, advancing subsequent applications.
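The homography-based simulation of deformation between planes can be sketched with a standard direct linear transform; this is a generic illustration of how synthetic misalignment between point sets can be generated, not the paper's pipeline.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform: solve for the 3x3 homography mapping four
    source points to four destination points, the classical way a planar
    deformation is parameterized."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A (smallest singular value) gives the homography entries.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    """Apply homography H to an (n, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    out = homo @ H.T
    return out[:, :2] / out[:, 2:3]
```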
Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception
Due to the uneven scattering and absorption of different light wavelengths in
aquatic environments, underwater images suffer from low visibility and noticeable
color deviations. With the advancement of autonomous underwater vehicles,
extensive research has been conducted on learning-based underwater enhancement
algorithms. These works can generate visually pleasing enhanced images and
mitigate the adverse effects of degraded images on subsequent perception tasks.
However, learning-based methods are susceptible to the inherent fragility of
adversarial attacks, causing significant disruption in results. In this work,
we introduce a collaborative adversarial resilience network, dubbed CARNet, for
underwater image enhancement and subsequent detection tasks. Concretely, we
first introduce an invertible network with strong perturbation-perceptual
abilities to isolate attacks from underwater images, preventing interference
with image enhancement and perceptual tasks. Furthermore, we propose a
synchronized attack training strategy with both visual-driven and
perception-driven attacks enabling the network to discern and remove various
types of attacks. Additionally, we incorporate an attack pattern discriminator
to heighten the robustness of the network against different attacks. Extensive
experiments demonstrate that the proposed method produces visually appealing
enhanced images and achieves an average of 6.71% higher detection mAP than
state-of-the-art methods.
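The visual-driven attacks used in such training can be illustrated in their most basic one-step sign-gradient (FGSM-style) form; the toy linear "enhancer" and its analytic gradient below are assumptions for self-containment, not the paper's networks or attack schedule.

```python
import numpy as np

def fgsm_perturbation(x, grad, eps=0.03):
    """One-step sign-gradient (FGSM-style) perturbation, the basic shape of a
    visual-driven attack; grad is the gradient of the attacked loss with
    respect to the input image x, and the result stays in [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy illustration with an analytic gradient: "enhancer" f(x) = w * x and
# visual loss L = mean((f(x) - y)^2), so dL/dx = 2 * w * (w*x - y) / x.size.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(4, 4))   # stand-in underwater image in [0, 1]
y = np.clip(x * 1.5, 0, 1)               # stand-in enhancement target
w = 1.2
grad = 2 * w * (w * x - y) / x.size
x_adv = fgsm_perturbation(x, grad)       # adversarial version of x
```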
WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement and Beyond
Underwater images suffer from light refraction and absorption, which impair
visibility and interfere with subsequent applications. Existing underwater
image enhancement methods mainly focus on image quality improvement, ignoring
the effect on practice. To balance the visual quality and application, we
propose a heuristic normalizing flow for detection-driven underwater image
enhancement, dubbed WaterFlow. Specifically, we first develop an invertible
mapping to achieve the translation between the degraded image and its clear
counterpart. Considering the differentiability and interpretability, we
incorporate the heuristic prior into the data-driven mapping procedure, where
the ambient light and medium transmission coefficient benefit credible
generation. Furthermore, we introduce a detection perception module to transmit
the implicit semantic guidance into the enhancement procedure, where the
enhanced images hold more detection-favorable features and are able to promote
the detection performance. Extensive experiments demonstrate the superiority
of WaterFlow over state-of-the-art methods, both quantitatively and
qualitatively.
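The heuristic prior mentioned above (ambient light and medium transmission coefficient) refers to the classical scattering-style image formation model. A minimal sketch, assuming the standard form I = J*t + A*(1 - t); the paper embeds this prior inside a normalizing flow rather than inverting the model directly.

```python
import numpy as np

def degrade(J, t, A):
    """Classical underwater/hazy formation model: observed image I from the
    clear scene J, medium transmission t, and ambient light A."""
    return J * t + A * (1.0 - t)

def restore(I, t, A, t_min=0.1):
    """Invert the model to recover the clear scene; the transmission is
    clamped away from zero for numerical stability."""
    t = np.maximum(t, t_min)
    return (I - A * (1.0 - t)) / t
```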
Repression of slow myosin heavy chain 2 gene expression in fast skeletal muscle fibers by muscarinic acetylcholine receptor and Gαq signaling
Gene expression in skeletal muscle fibers is regulated by innervation and intrinsic fiber properties. To determine the mechanism of repression of slow MyHC2 expression in innervated fast pectoralis major (PM) fibers, we investigated the function of the muscarinic acetylcholine receptor (mAchR) and Gαq. Both mAchR and Gαq are abundant in medial adductor (MA) and PM fibers, and mAchR and Gαq interact in these fibers. Whereas innervation of PM fibers was insufficient to induce slow MyHC2 expression, inhibition of mAchR activity with atropine in innervated PM fibers induced slow MyHC2 expression. Increased Gαq activity repressed slow MyHC2 expression to nondetectable levels in innervated MA fibers. Reduced mAchR activity decreased PKC activity in PM fibers, and increased Gαq activity increased PKC activity in PM and MA fibers. Decreased PKC activity in atropine-treated innervated PM fibers correlated with slow MyHC2 expression. These data suggest that slow MyHC2 repression in innervated fast PM fibers is mediated by cell signaling involving mAchRs, Gαq, and PKC.
Holistic Dynamic Frequency Transformer for Image Fusion and Exposure Correction
The correction of exposure-related issues is a pivotal component in enhancing
the quality of images, offering substantial implications for various computer
vision tasks. Historically, most methodologies have predominantly utilized
spatial domain recovery, offering limited consideration to the potentialities
of the frequency domain. Additionally, there has been a lack of a unified
perspective towards low-light enhancement, exposure correction, and
multi-exposure fusion, complicating and impeding the optimization of image
processing. In response to these challenges, this paper proposes a novel
methodology that leverages the frequency domain to improve and unify the
handling of exposure correction tasks. Our method introduces Holistic Frequency
Attention and Dynamic Frequency Feed-Forward Network, which replace
conventional correlation computation in the spatial domain. They form a
foundational building block that facilitates a U-shaped Holistic Dynamic
Frequency Transformer as a filter to extract global information and dynamically
select important frequency bands for image restoration. Complementing this, we
employ a Laplacian pyramid to decompose images into distinct frequency bands,
followed by multiple restorers, each tuned to recover specific frequency-band
information. The pyramid fusion allows a more detailed and nuanced image
restoration process. Ultimately, our structure unifies the three tasks of
low-light enhancement, exposure correction, and multi-exposure fusion, enabling
comprehensive treatment of all classical exposure errors. Benchmarking on
mainstream datasets for these tasks, our proposed method achieves
state-of-the-art results, paving the way for more sophisticated and unified
solutions in exposure correction.
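The pyramid decomposition step can be illustrated with a dependency-free sketch; note it uses nearest-neighbour down/upsampling in place of the Gaussian filtering a production Laplacian pyramid would use, and the per-band restorers are omitted.

```python
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Minimal Laplacian-style pyramid: each level stores the detail lost by
    downsampling, plus a coarsest low-frequency residual at the end."""
    pyr = []
    cur = img
    for _ in range(levels):
        down = cur[::2, ::2]                                   # halve resolution
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)  # bring it back up
        pyr.append(cur - up)                                   # band-pass detail
        cur = down
    pyr.append(cur)                                            # residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid: upsample and add details back, coarse to fine."""
    cur = pyr[-1]
    for detail in reversed(pyr[:-1]):
        cur = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1) + detail
    return cur
```

With this choice of deterministic up/downsampling the reconstruction is exact for power-of-two image sizes, which is what makes per-band restoration followed by fusion lossless in principle.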
Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction
Photographs taken with less-than-ideal exposure settings often display poor
visual quality. Since the correction procedures vary significantly, it is
difficult for a single neural network to handle all exposure problems.
Moreover, the inherent limitations of convolutions hinder the model's ability
to restore faithful color or details in extremely over-/under-exposed regions.
To overcome these limitations, we propose a Macro-Micro-Hierarchical
transformer, which consists of a macro attention to capture long-range
dependencies, a micro attention to extract local features, and a hierarchical
structure for coarse-to-fine correction. Specifically, the complementary
macro-micro attention designs enhance locality while allowing global
interactions. The hierarchical structure enables the network to correct
exposure errors of different scales layer by layer. Furthermore, we propose a
contrast constraint and couple it seamlessly in the loss function, where the
corrected image is pulled towards the positive sample and pushed away from the
dynamically generated negative samples. Thus the remaining color distortion and
loss of detail can be removed. We also extend our method as an image enhancer
for low-light face recognition and low-light semantic segmentation. Experiments
demonstrate that our approach obtains more attractive results than
state-of-the-art methods, both quantitatively and qualitatively.
Comment: Accepted by ACM MM 202
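The contrast constraint described above (pulling the corrected image toward the positive sample and pushing it from dynamically generated negatives) has the shape of a triplet-style loss. A minimal sketch on feature vectors; the margin value and Euclidean distance are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def contrast_loss(anchor, positive, negatives, margin=0.5):
    """Triplet-style contrast term: penalize the anchor (corrected image
    features) for being closer to any negative than to the positive, up to
    a margin; zero when the positive is sufficiently closer."""
    d_pos = np.linalg.norm(anchor - positive)
    d_negs = np.array([np.linalg.norm(anchor - n) for n in negatives])
    return float(np.mean(np.maximum(d_pos - d_negs + margin, 0.0)))
```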