338 research outputs found
Interface crack between dissimilar one-dimensional hexagonal quasicrystals with piezoelectric effect
Redox Flow Batteries: Fundamentals and Applications
A redox flow battery is an electrochemical energy storage device that converts chemical energy into electrical energy through reversible oxidation and reduction of working fluids. The concept was initially conceived in the 1970s. A clean and sustainable energy supply from renewable sources will require efficient, reliable and cost‐effective energy storage systems. Due to their flexibility in system design and favorable cost scaling, redox flow batteries are promising for stationary storage of energy from intermittent sources such as solar and wind. This chapter covers the basic principles of electrochemistry in redox flow batteries and provides an overview of their status and future challenges. Recent progress in redox couples, membranes and electrode materials is discussed, and new demonstrations and commercial developments are addressed.
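The basic electrochemistry above fixes the theoretical energy a flow battery tank can hold via Faraday's law. The following is a minimal sketch of that arithmetic; the electrolyte concentration, tank volume and cell voltage are illustrative values typical of a vanadium system, not figures from the chapter:

```python
# Hedged sketch: theoretical energy stored in one tank of a redox flow
# battery, from Faraday's law. The numbers are illustrative assumptions
# (a 1 M vanadium electrolyte, 100 L tank, 1.26 V cell), not chapter data.
F = 96485.0          # Faraday constant, C/mol
n = 1                # electrons transferred per redox event
conc = 1.0           # active-species concentration, mol/L
volume = 100.0       # tank volume, L
e_cell = 1.26        # nominal cell voltage, V

charge = n * F * conc * volume          # total charge, coulombs
energy_wh = charge * e_cell / 3600.0    # J (= C*V) converted to watt-hours
print(f"{energy_wh:.0f} Wh")            # ~3377 Wh for these assumptions
```

Because the energy scales with tank volume while power scales with stack area, capacity and power can be sized independently, which is the design flexibility the abstract refers to.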
Giant negative magnetoresistance of spin polarons in magnetic semiconductors–chromium-doped Ti2O3 thin films
Epitaxial Cr-doped Ti2O3 films show giant negative magnetoresistance of up to –365% at 2 K. The resistivity of the doped samples follows the behavior expected of spin (magnetic) polarons at low temperature, namely ρ = ρ₀ exp[(T₀/T)^p], with p = 0.5 in zero field. A large applied field quenches the spin polarons and p is reduced to the value of 0.25 expected for lattice polarons. The formation of spin polarons is an indication of strong exchange coupling between the magnetic ions and holes in the system.
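The resistivity law quoted in the abstract can be sketched numerically to show how the two exponents separate at low temperature; the prefactor ρ₀ and characteristic temperature T₀ below are made-up illustration values, and only the exponents p = 0.5 (spin polarons, zero field) and p = 0.25 (lattice polarons, high field) come from the text:

```python
import math

# Sketch of the hopping-type resistivity law from the abstract:
# rho(T) = rho0 * exp[(T0/T)^p]. rho0 and T0 are assumed values;
# only the exponents p = 0.5 and p = 0.25 are from the source.
def resistivity(T, rho0=1.0, T0=100.0, p=0.5):
    return rho0 * math.exp((T0 / T) ** p)

# At low T the p = 0.5 branch diverges far faster than p = 0.25,
# which is how the two regimes are told apart in a fit.
ratio = resistivity(2.0, p=0.5) / resistivity(2.0, p=0.25)
print(f"rho(p=0.5) / rho(p=0.25) at 2 K: {ratio:.1f}")
```

Quenching the spin polarons with a field thus shows up directly as a change of slope in a plot of ln ρ versus T^(-p).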
BALF: Simple and Efficient Blur Aware Local Feature Detector
Local feature detection is a key ingredient of many image processing and
computer vision applications, such as visual odometry and localization. Most
existing algorithms focus on feature detection from a sharp image. They would
thus have degraded performance once the image is blurred, which could happen
easily under low-lighting conditions. To address this issue, we propose a
simple yet both efficient and effective keypoint detection method that is able
to accurately localize the salient keypoints in a blurred image. Our method
takes advantage of a novel multi-layer perceptron (MLP) based architecture
that significantly improves the detection repeatability for a blurred image. The
network is also lightweight and able to run in real time, which enables its
deployment for time-constrained applications. Extensive experimental results
demonstrate that our detector is able to improve the detection repeatability
with blurred images, while maintaining performance comparable to existing
state-of-the-art detectors on sharp images.
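The idea of scoring local patches with an MLP can be sketched as follows. This is not the authors' BALF architecture, just a toy illustration of the pattern the abstract describes: flatten a patch, pass it through a tiny two-layer MLP, and treat the scalar output as a keypoint score. The weights here are random; a real detector would learn them.

```python
import numpy as np

# Toy MLP keypoint scorer (illustrative, NOT the BALF architecture):
# score each patch with a small 2-layer MLP, keep the top-k patches.
rng = np.random.default_rng(0)

def mlp_score(patch, w1, b1, w2, b2):
    h = np.maximum(patch.ravel() @ w1 + b1, 0.0)   # ReLU hidden layer
    return float(h @ w2 + b2)                      # scalar keypoint score

P = 8                                    # patch size (assumption)
w1 = rng.standard_normal((P * P, 32)) * 0.1
b1 = np.zeros(32)
w2 = rng.standard_normal(32) * 0.1
b2 = 0.0

image = rng.random((64, 64))
scores = np.array([[mlp_score(image[i:i+P, j:j+P], w1, b1, w2, b2)
                    for j in range(0, 64 - P, P)]
                   for i in range(0, 64 - P, P)])

# Keep the indices of the k highest-scoring patches as candidate keypoints.
k = 5
flat = np.argsort(scores.ravel())[::-1][:k]
keypoints = [divmod(int(idx), scores.shape[1]) for idx in flat]
print(keypoints)
```

Because the per-patch work is a couple of small matrix products, this style of detector is cheap enough to run in real time, which matches the deployment claim in the abstract.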
Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes
Object detection with inaccurate bounding box supervision has attracted
broad interest due to the expense of high-quality annotation data or the
occasional inevitability of low annotation quality (\eg tiny objects).
Previous works usually utilize multiple instance learning (MIL), which highly
depends on category information, to select and refine a low-quality box. These
methods suffer from object drift, group prediction and part domination problems
without exploring spatial information. In this paper, we heuristically propose
a \textbf{Spatial Self-Distillation based Object Detector (SSD-Det)} to mine
spatial information to refine the inaccurate box in a self-distillation
fashion. SSD-Det utilizes a Spatial Position Self-Distillation \textbf{(SPSD)}
module to exploit spatial information and an interactive structure to combine
spatial information and category information, thus constructing a high-quality
proposal bag. To further improve the selection procedure, a Spatial Identity
Self-Distillation \textbf{(SISD)} module is introduced in SSD-Det to obtain
spatial confidence to help select the best proposals. Experiments on MS-COCO
and VOC datasets with noisy box annotations verify our method's effectiveness
and show that it achieves state-of-the-art performance. The code is available at
https://github.com/ucas-vg/PointTinyBenchmark/tree/SSD-Det.
Comment: accepted by ICCV 202
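The "select the best proposal from a bag" step can be illustrated with a toy sketch. This is not the SSD-Det code: spatial confidence is approximated here simply by a proposal's mean IoU with the rest of the bag, combined with a mock classification score, to show why adding spatial agreement helps avoid picking an outlier box:

```python
# Toy proposal selection (illustrative, NOT SSD-Det): combine a mock
# classification score with a spatial term, here the mean IoU of a box
# with the other proposals in its bag.
def iou(a, b):
    # boxes are (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_best(proposals, cls_scores):
    """Score = cls_score * mean IoU with the rest of the bag."""
    best, best_score = None, -1.0
    for i, box in enumerate(proposals):
        others = [iou(box, q) for j, q in enumerate(proposals) if j != i]
        spatial = sum(others) / len(others) if others else 1.0
        s = cls_scores[i] * spatial
        if s > best_score:
            best, best_score = box, s
    return best

bag = [(10, 10, 50, 50), (12, 11, 52, 49), (40, 40, 90, 90)]
scores = [0.9, 0.8, 0.95]
# The outlier box wins on classification score alone but loses once
# spatial agreement with the bag is factored in.
print(select_best(bag, scores))
```

A category-only selector would pick the third box (highest score, 0.95); weighting by spatial agreement picks the first, which overlaps heavily with the rest of the bag. This is the intuition behind adding spatial information to the MIL-style selection.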
The Structure of Coronal Mass Ejections Recorded by the K-Coronagraph at Mauna Loa Solar Observatory
Previous survey studies reported that coronal mass ejections (CMEs) can
exhibit various structures in white-light coronagraphs, and 30\% of them
have the typical three-part feature in the high corona (e.g., 2--6 $R_\odot$),
which has been taken as the prototypical structure of CMEs. It is widely
accepted that CMEs result from eruption of magnetic flux ropes (MFRs), and the
three-part structure can be understood easily by means of the MFR eruption. It
is interesting and significant to answer why only 30\% of CMEs have the
three-part feature in previous studies. Here we conduct a synthesis of the CME
structure in the field of view (FOV) of K-Coronagraph (1.05--3 $R_\odot$). In
total, 369 CMEs are observed from 2013 September to 2022 November. After
inspecting the CMEs one by one through joint observations of the AIA,
K-Coronagraph and LASCO/C2, we find 71 events according to the criteria: 1)
limb event; 2) normal CME, i.e., angular width $\geq 30^{\circ}$; 3)
K-Coronagraph caught the early eruption stage. All (or more than 90\%
considering several ambiguous events) of the 71 CMEs exhibit the three-part
feature in the FOV of K-Coronagraph, while only 30--40\% have the feature in
the C2 FOV (2--6 $R_\odot$). For the first time, our studies show that
90--100\% and 30--40\% of normal CMEs possess the three-part structure in the
low and high corona, respectively, which demonstrates that many CMEs can lose
the three-part feature during their early evolution, and strongly supports
that most (if not all) CMEs have MFR structures.
Comment: 10 pages, 4 figures, accepted for publication in ApJ
SyreaNet: A Physically Guided Underwater Image Enhancement Framework Integrating Synthetic and Real Images
Underwater image enhancement (UIE) is vital for high-level vision-related
underwater tasks. Although learning-based UIE methods have made remarkable
achievements in recent years, it is still challenging for them to consistently
deal with various underwater conditions, which could be caused by: 1) the use
of the simplified atmospheric image formation model in UIE may result in severe
errors; 2) the network trained solely with synthetic images might have
difficulty in generalizing well to real underwater images. In this work, we,
for the first time, propose a framework \textit{SyreaNet} for UIE that
integrates both synthetic and real data under the guidance of the revised
underwater image formation model and novel domain adaptation (DA) strategies.
First, an underwater image synthesis module based on the revised model is
proposed. Then, a physically guided disentangled network is designed to predict
the clear images by combining both synthetic and real underwater images. The
intra- and inter-domain gaps are bridged by fully exchanging the domain
knowledge. Extensive experiments demonstrate the superiority of our framework
over other state-of-the-art (SOTA) learning-based UIE methods qualitatively and
quantitatively. The code and dataset are publicly available at
https://github.com/RockWenJJ/SyreaNet.git.
Comment: 7 pages; 10 figures
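The contrast the abstract draws between the simplified atmospheric model and a revised underwater formation model can be sketched numerically. The sketch below follows the general revised-model idea of using *different* per-channel coefficients for the direct signal and the backscatter (it is not necessarily the exact model SyreaNet uses, and all coefficient values are assumed for illustration):

```python
import numpy as np

# Hedged sketch of a revised underwater image formation model: the direct
# signal and the backscatter decay with DIFFERENT per-channel coefficients,
# unlike the simplified atmospheric model, which shares one transmission map.
# All numeric values below are illustrative assumptions.
def synthesize(J, depth, beta_d, beta_b, B_inf):
    """J: clean image (H, W, 3) in [0, 1]; depth: (H, W) range in metres."""
    t_d = np.exp(-beta_d * depth[..., None])        # direct transmission
    t_b = 1.0 - np.exp(-beta_b * depth[..., None])  # backscatter build-up
    return J * t_d + B_inf * t_b                    # attenuated signal + veil

J = np.full((4, 4, 3), 0.8)                # flat grey "clean" image
depth = np.full((4, 4), 5.0)               # 5 m range everywhere
beta_d = np.array([0.40, 0.10, 0.05])      # red attenuates fastest (assumed)
beta_b = np.array([0.30, 0.08, 0.04])
B_inf = np.array([0.05, 0.30, 0.40])       # bluish-green veiling light
I = synthesize(J, depth, beta_d, beta_b, B_inf)
print(I[0, 0])   # red channel drops sharply; blue-green dominate
```

Synthesizing training pairs (J, I) this way, and then mixing them with real underwater images via domain adaptation, is the general recipe the abstract describes.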