Two Different Integration Methods for Weather Radar-Based Quantitative Precipitation Estimation
A Human Eye-based Text Color Scheme Generation Method for Image Synthesis
Synthetic data have proven effective for scene text detection and recognition
tasks. However, two problems remain. First, the color schemes used to color
text in existing methods are relatively fixed color key-value pairs learned
from real datasets; dirty data in those datasets can make the colors of text
and background too similar to distinguish from each other. Second, the
generated texts are uniformly confined to a single depth plane of a picture,
whereas in the real world text may appear across depths. To address these
problems, in this paper we design a novel method to generate color schemes
that are consistent with how the human eye observes things. The advantages of
our method are as follows: (1) it overcomes the color confusion between text
and background caused by dirty data; (2) the generated texts may appear in
most locations of any image, even across depths; (3) it avoids analyzing the
depth of the background, so the performance of our method exceeds that of
state-of-the-art methods; (4) image generation is fast, at roughly one picture
every three milliseconds. The effectiveness of our method is verified on
several public datasets. Comment: Accepted by EITCE 2022, No.QJE77JVOL
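The abstract's core idea is rejecting text/background color pairs that the human eye cannot distinguish. As a rough illustration of that constraint (not the authors' perceptual model, which the abstract does not specify), the sketch below filters candidate pairs with the WCAG relative-luminance contrast ratio; all function names and the threshold are illustrative assumptions.

```python
# Hedged sketch: reject text/background color pairs that are too similar,
# using the WCAG 2.x relative-luminance contrast ratio as a stand-in for
# the paper's (unspecified) human-eye color-scheme model.

def _linearize(c):
    """Linearize one sRGB channel given in [0, 255] (WCAG 2.x formula)."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio in [1, 21]; higher means more distinguishable."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def is_legible(fg, bg, threshold=4.5):
    """Accept a candidate text color only if it stands out from the background
    (4.5 is the WCAG AA threshold for normal text; an assumed choice here)."""
    return contrast_ratio(fg, bg) >= threshold
```

A generator built on such a check would simply resample the text color whenever `is_legible` fails for the local background patch.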
Neural Degradation Representation Learning for All-In-One Image Restoration
Existing image restoration methods have demonstrated effective performance on
a single degradation type. In practical applications, however, the degradation
is often
unknown, and the mismatch between the model and the degradation will result in
a severe performance drop. In this paper, we propose an all-in-one image
restoration network that tackles multiple degradations. Due to the
heterogeneous nature of different types of degradations, it is difficult to
process multiple degradations in a single network. To this end, we propose to
learn a neural degradation representation (NDR) that captures the underlying
characteristics of various degradations. The learned NDR decomposes different
types of degradations adaptively, similar to a neural dictionary that
represents basic degradation components. Subsequently, we develop a degradation
query module and a degradation injection module to effectively recognize and
utilize the specific degradation based on NDR, enabling the all-in-one
restoration ability for multiple degradations. Moreover, we propose a
bidirectional optimization strategy to effectively drive NDR to learn the
degradation representation by optimizing the degradation and restoration
processes alternately. Comprehensive experiments on representative types of
degradations (including noise, haze, rain, and downsampling) demonstrate the
effectiveness and generalization capability of our method.
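The abstract likens the learned NDR to a neural dictionary of basic degradation components that a query module soft-matches against. A minimal attention-style query over such a dictionary can be sketched as follows; the real architecture, feature dimensions, and training procedure are not given in the abstract, so every shape and name here is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: soft-assign an image feature to K learned "degradation atoms"
# (a stand-in for the paper's degradation query over the NDR dictionary).

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_degradation(feat, dictionary):
    """feat: (d,) image feature; dictionary: (K, d) degradation atoms.
    Returns a (d,) mixed degradation representation for injection."""
    scores = dictionary @ feat / np.sqrt(feat.shape[0])  # (K,) similarities
    weights = softmax(scores)                            # attention over atoms
    return weights @ dictionary                          # weighted combination
```

The returned vector would then condition the restoration branch (the paper's injection module), which is omitted here.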
Hybrid ceramics-based cancer theranostics
Cancer is a major threat to human lives. Early detection and precisely targeted therapies are the most effective way to reduce the difficulties (e.g., side effects, low survival rate) of treating cancer. To enable effective cancer detection and treatment, ceramic biomaterials have been intensively and extensively investigated owing to their good biocompatibility, high bioactivity, suitable biodegradability and other distinctive properties required of medical devices in oncology. Through hybridization with other materials and the loading of imaging and therapeutic agents, nanobioceramics can form multifunctional nanodevices that simultaneously provide diagnostic and therapeutic functions for cancer patients; these nanodevices are known as hybrid ceramics-based cancer theranostics. In this review, recent developments in hybrid ceramics-based cancer theranostics, including key aspects such as their preparation, biological evaluation and applications, are summarized and discussed. The challenges and future perspectives for the clinical translation of hybrid ceramics-based cancer theranostics are also discussed. It is believed that the potential of hybrid ceramic nanoparticles as cancer theranostics is high and that their future is bright despite the difficulties along the way to clinical translation.
Using optical code-division multiple-access techniques in Michelson interferometer vibration sensor networks
This study proposes a spectral-amplitude-coding optical code-division multiple-access (SAC-OCDMA) framework for measuring the vibration frequency of a test object by using a Michelson interferometer vibration sensor (MIVS). Each sensor node possesses an individual signature codeword, and liquid crystal spatial light modulator (LC-SLM) encoders/decoders (codecs) are adopted to provide excellent orthogonal properties in the frequency domain. The proposed LC-SLM-based OCDMA system mitigates multiple-access interference among all sensor nodes. When optical beams strike and are reflected by the object, the sensing interferometer becomes sensitive to external physical parameters such as temperature, strain, and vibration. The MIVS comprises a Michelson interferometer placed at a specific distance from the test object on a designed vibration platform. A balanced photodetector (BPD) converts the light output of the LC-SLM decoders into electrical signals, and a digitizing oscilloscope Fourier transforms the BPD output, thereby yielding the vibration frequency of the test object. The results showed that the proposed interferometric sensor network can serve as a distributed, highly sensitive sensor for obtaining mechanical quantities. This study provides a new optical sensor network for vibration frequency measurement.
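The final step the abstract describes, Fourier transforming the digitized BPD output to read off the vibration frequency, can be sketched in a few lines. The sample rate, signal model, and noise level below are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hedged sketch: recover the dominant vibration frequency from a digitized
# balanced-photodetector (BPD) trace via a real FFT, as the abstract's
# oscilloscope step does.

def dominant_frequency(signal, sample_rate):
    """Return the strongest nonzero frequency component of a real signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    spectrum[0] = 0.0                       # ignore the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# Simulated BPD output: a 250 Hz vibration plus a DC offset and mild noise.
fs = 10_000                                 # samples per second (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(42)
bpd = 0.5 + np.sin(2 * np.pi * 250 * t) + 0.05 * rng.normal(size=t.size)
```

With a one-second window the frequency resolution is 1 Hz, so the 250 Hz tone lands exactly on an FFT bin.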
Is the Long-Term Outcome of PCI or CABG in Insulin-Treated Diabetic Patients Really Worse Than Non-Insulin-Treated Ones?
SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation
Depth estimation from images serves as the fundamental step of 3D perception
for autonomous driving and is an economical alternative to expensive depth
sensors such as LiDAR. Temporal photometric constraints enable self-supervised
depth estimation without labels, further facilitating its application.
However, most existing methods predict depth solely from each monocular image
and ignore the correlations among the multiple surrounding cameras typically
available on modern self-driving vehicles. In this paper, we propose
SurroundDepth, a method that incorporates information from multiple
surrounding views to predict depth maps across cameras. Specifically, we
employ a joint network to process all the surrounding views and propose a
cross-view transformer to effectively fuse information from multiple views. We
apply cross-view self-attention to efficiently enable global interactions
between multi-camera feature maps. Unlike self-supervised monocular depth
estimation, we are able to predict real-world scales given the multi-camera
extrinsic matrices. To achieve this goal, we adopt two-frame
structure-from-motion to extract scale-aware pseudo depths to pretrain the
models. Further, instead of predicting the ego-motion of each individual
camera, we estimate a universal ego-motion of the vehicle and transfer it to
each view to achieve multi-view ego-motion consistency. In experiments, our
method achieves state-of-the-art performance on the challenging multi-camera
depth estimation datasets DDAD and nuScenes. Comment: Accepted to CoRL 2022.
Project page:
https://surrounddepth.ivg-research.xyz Code:
https://github.com/weiyithu/SurroundDept
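The cross-view self-attention the abstract describes lets feature maps from the surrounding cameras attend to one another. A minimal single-head version over one pooled feature vector per camera can be sketched as below; the actual SurroundDepth transformer operates on multi-scale spatial feature maps with learned projections, so all shapes and weights here are illustrative assumptions.

```python
import numpy as np

# Hedged sketch: single-head attention across N camera views, so each view's
# feature aggregates information from all surrounding views.

def cross_view_attention(feats, Wq, Wk, Wv):
    """feats: (N, d), one pooled feature per camera; Wq/Wk/Wv: (d, d).
    Returns (N, d) features fused across views."""
    q, k, v = feats @ Wq, feats @ Wk, feats @ Wv
    scores = q @ k.T / np.sqrt(feats.shape[1])       # (N, N) view-to-view
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over views
    return attn @ v                                  # weighted fusion
```

In the full model this fusion would sit inside the depth network, before decoding per-camera depth maps at a common real-world scale.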