A high performance surface acoustic wave visible light sensor using novel materials: Bi2S3 nanobelts
Low-dimensional Bi2S3 materials are well suited to photodetectors, offering excellent stability and fast response times. In this work, we developed a visible light sensor with good performance based on surface acoustic wave (SAW) devices using Bi2S3 nanobelts as the sensing material. The SAW delay-line sensor was fabricated on ST-cut quartz with a designed wavelength of 15.8 microns using conventional photolithography techniques. The measured center frequency was 200.02 MHz. The Bi2S3 nanobelts, prepared by a facile hydrothermal process, were deposited onto the SAW sensors by spin-coating. Under irradiation with 625 nm visible light at a power intensity of 170 μW cm−2, the sensor showed a fast and large response, with a frequency upshift of 7 kHz within 1 s. The frequency upshift of the SAW device is mainly attributed to the mass-loading effect caused by the desorption of oxygen from the Bi2S3 nanobelts under visible light irradiation.
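The reported upshift can be understood through a first-order mass-loading relation. The sketch below is illustrative only: it borrows the Sauerbrey-type coefficient used for quartz microbalances (not calibrated for ST-cut SAW devices), and the areal mass change from oxygen desorption is an assumed value, not one reported in the abstract.

```python
def saw_frequency_shift(f0_hz, c_m, delta_mass_per_area):
    """First-order mass-loading relation: delta_f = -c_m * f0^2 * dm/A.
    A negative mass change (desorption) gives a positive (upward) shift."""
    return -c_m * f0_hz**2 * delta_mass_per_area

f0 = 200.02e6   # measured center frequency from the abstract, in Hz
c_m = 2.26e-6   # Sauerbrey coefficient for quartz microbalances (illustrative here)
dm = -7.7e-8    # assumed areal mass change from oxygen desorption, g/cm^2

shift = saw_frequency_shift(f0, c_m, dm)
print(f"frequency shift: {shift / 1e3:.1f} kHz")  # positive, i.e., an upshift
```

With these assumed numbers the shift comes out on the order of the 7 kHz reported in the abstract; the point is only the sign and scaling (quadratic in f0, linear in mass change), not the specific values.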
Meeting Action Item Detection with Regularized Context Modeling
Meetings are increasingly important for collaboration. Action items in
meeting transcripts are crucial for managing post-meeting to-do tasks, which
are usually summarized laboriously. The Action Item Detection task aims to
automatically detect meeting content associated with action items. However,
datasets manually annotated with action item detection labels are scarce and
small in scale. We construct and release the first Chinese meeting corpus with
manual action item annotations. In addition, we propose a Context-Drop approach
to utilize both local and global contexts by contrastive learning, and achieve
better accuracy and robustness for action item detection. We also propose a
Lightweight Model Ensemble method to exploit different pre-trained models.
Experimental results on our Chinese meeting corpus and the English AMI corpus
demonstrate the effectiveness of the proposed approaches.
Comment: 5 pages, 2 figures. Paper accepted to the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023), Rhodes, Greece.
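As a toy illustration of the Context-Drop idea described above, the sketch below builds two views of the same target utterance: one with the full (global) context and one with context sentences randomly dropped (a more local view). A contrastive or consistency loss would then pull the two encodings of the target together. The example sentences, the drop probability, and the view construction are illustrative assumptions, not the paper's implementation.

```python
import random

def context_drop(context, target, p_drop=0.5, seed=0):
    """Build two views of the same target utterance for contrastive learning:
    one keeps the full (global) context, the other randomly drops context
    sentences. (Sketch; the paper's exact augmentation and loss are assumed.)"""
    rng = random.Random(seed)
    kept = [s for s in context if rng.random() > p_drop]
    full_view = context + [target]
    dropped_view = kept + [target]
    return full_view, dropped_view

ctx = ["Let's review the roadmap.",
       "Marketing wants a demo next month.",
       "The budget is tight this quarter."]
full, dropped = context_drop(ctx, "Alice will send the slides by Friday.")
```

Encoding both views with the same model and penalizing disagreement encourages the action-item classifier to be robust to how much surrounding context is available.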
Three dimensional spider-web-like superconducting filamentary paths in single crystals
Since the discovery of high temperature superconductivity in F-doped LaFeAsO,
many new iron-based superconductors with different structures have been
fabricated. The observation of superconductivity at about 32 K in KxFe2-ySe2,
iso-structural to the FeAs-based 122 superconductors, came as a surprise and
immediately stimulated interest, because band structure calculations
predicted the absence of the hole pocket that was supposed to be necessary for
the theoretical picture of S+- pairing. Soon afterwards, it was found that the
material may separate into an insulating antiferromagnetic K2Fe4Se5 phase and
a superconducting phase. How these two phases coexist, and which is the parent
phase for superconductivity, remains unresolved. In this study we
use different quenching processes to produce target samples with distinct
microstructures, and apply multiple measurement techniques to reveal a close
relationship between the microstructures and the global appearance of
superconductivity. In addition, we clearly illustrate three-dimensional
spider-web-like superconducting filamentary paths, and for the first time
propose that the superconducting phase may originate from a state with one
vacancy in every eight Fe sites with the √8×√10 parallelogram structure.
Comment: 22 pages, 7 figures
ShadowDiffusion: When Degradation Prior Meets Diffusion Model for Shadow Removal
Recent deep learning methods have achieved promising results in image shadow
removal. However, their restored images still suffer from unsatisfactory
boundary artifacts, due to the lack of degradation prior embedding and the
deficiency in modeling capacity. Our work addresses these issues by proposing a
unified diffusion framework that integrates both the image and degradation
priors for highly effective shadow removal. In detail, we first propose a
shadow degradation model, which inspires us to build a novel unrolling
diffusion model, dubbed ShadowDiffusion. It remarkably improves the model's
capacity in shadow removal via progressively refining the desired output with
both degradation prior and diffusive generative prior, which by nature can
serve as a new strong baseline for image restoration. Furthermore,
ShadowDiffusion progressively refines the estimated shadow mask as an auxiliary
task of the diffusion generator, which leads to more accurate and robust
shadow-free image generation. We conduct extensive experiments on three popular
public datasets, including ISTD, ISTD+, and SRD, to validate our method's
effectiveness. Compared to the state-of-the-art methods, our model achieves a
significant improvement in terms of PSNR, increasing from 31.69 dB to 34.73 dB
on the SRD dataset.
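Since the abstract reports its gains in PSNR, here is the standard PSNR computation for reference. This is a pure-Python sketch on toy flattened pixel lists, not the paper's evaluation code.

```python
import math
import random

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two flattened images."""
    n = len(reference)
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val**2 / mse)

# Toy data: a random "clean" image and a mildly noisy "restored" version.
random.seed(0)
clean = [random.randrange(256) for _ in range(4096)]
noisy = [min(255.0, max(0.0, p + random.gauss(0, 5))) for p in clean]
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```

Higher PSNR means the restored image is closer to the reference, which is why the jump from 31.69 dB to 34.73 dB is a substantial improvement.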
Improving BERT with Hybrid Pooling Network and Drop Mask
Transformer-based pre-trained language models, such as BERT, achieve great
success in various natural language understanding tasks. Prior research found
that BERT captures a rich hierarchy of linguistic information at different
layers. However, the vanilla BERT uses the same self-attention mechanism for
each layer to model the different contextual features. In this paper, we
propose a HybridBERT model which combines self-attention and pooling networks
to encode different contextual features in each layer. Additionally, we propose
a simple DropMask method to address the mismatch between pre-training and
fine-tuning caused by excessive use of special mask tokens during Masked
Language Modeling pre-training. Experiments show that HybridBERT outperforms
BERT in pre-training with lower loss, faster training speed (8% relative),
lower memory cost (13% relative), and also in transfer learning, with 1.5%
relatively higher accuracy on downstream tasks. Additionally, DropMask improves
the accuracy of BERT on downstream tasks across various masking rates.
Comment: 7 pages, 2 figures
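One plausible reading of a hybrid layer, sketched below with toy arrays, is to mix a (projection-free) self-attention output with a local mean-pooling output. The mixing weight `alpha`, the window size, and the absence of learned projections are all simplifying assumptions; the abstract does not specify how HybridBERT actually combines the two branches.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h):
    """Toy single-head self-attention with no learned projections."""
    scores = h @ h.T / np.sqrt(h.shape[-1])
    return softmax(scores) @ h

def window_mean_pool(h, window=3):
    """Local mean pooling over a sliding window of neighboring tokens."""
    n = len(h)
    out = np.empty_like(h)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out[i] = h[lo:hi].mean(axis=0)
    return out

def hybrid_layer(h, alpha=0.5):
    """Hypothetical hybrid block: blend attention and pooling outputs."""
    return alpha * self_attention(h) + (1 - alpha) * window_mean_pool(h)

h = np.random.default_rng(0).normal(size=(6, 8))  # 6 tokens, hidden dim 8
out = hybrid_layer(h)
```

The intuition matching the abstract is that pooling is a cheaper, more local feature extractor than full self-attention, so blending the two per layer can cut compute and memory while still capturing contextual features.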
Improving Long Document Topic Segmentation Models With Enhanced Coherence Modeling
Topic segmentation is critical for obtaining structured documents and
improving downstream tasks such as information retrieval. Due to their ability
to automatically explore clues of topic shift from abundant labeled data, recent
supervised neural models have greatly promoted the development of long document
topic segmentation, but they leave the deeper relationship between coherence and
topic segmentation underexplored. Therefore, this paper enhances the ability of
supervised models to capture coherence from both logical structure and semantic
similarity perspectives to further improve the topic segmentation performance,
proposing Topic-aware Sentence Structure Prediction (TSSP) and Contrastive
Semantic Similarity Learning (CSSL). Specifically, the TSSP task is proposed to
force the model to comprehend structural information by learning the original
relations between adjacent sentences in a disarrayed document, which is
constructed by jointly disrupting the original document at topic and sentence
levels. Moreover, we utilize inter- and intra-topic information to construct
contrastive samples and design the CSSL objective to ensure that sentence
representations in the same topic have higher similarity, while those in
different topics are less similar. Extensive experiments show that the
Longformer with our approach significantly outperforms the old state-of-the-art
(SOTA) methods. Our approach improves on the old SOTA by 3.42 points
(73.74 -> 77.16) and reduces the error metric by 1.11 points (15.0 -> 13.89)
on WIKI-727K, and achieves an average relative reduction of 4.3% on
WikiSection. An average relative drop of 8.38% on two out-of-domain datasets
also demonstrates the robustness of our approach.
Comment: Accepted by EMNLP 2023. Code is available at
https://github.com/alibaba-damo-academy/SpokenNLP
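The CSSL objective described above can be sketched as an InfoNCE-style loss in which a sentence from the same topic is the positive and sentences from other topics are negatives. The temperature, the toy embeddings, and the exact loss form are assumptions for illustration; the paper's formulation may differ.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_topic_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style sketch: `positive` is a sentence embedding from the same
    topic as `anchor`; `negatives` come from other topics. Minimizing this
    pushes same-topic similarity up and cross-topic similarity down."""
    sims = np.array([cosine(anchor, positive)] +
                    [cosine(anchor, n) for n in negatives])
    logits = sims / tau
    m = logits.max()
    lse = m + np.log(np.exp(logits - m).sum())   # stable log-sum-exp
    return float(lse - logits[0])                # -log softmax prob of positive

# Toy embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
same_topic = anchor + 0.05 * rng.normal(size=8)   # near-duplicate: same topic
other_topics = [rng.normal(size=8) for _ in range(4)]
loss = contrastive_topic_loss(anchor, same_topic, other_topics)
```

When the positive is genuinely close to the anchor the loss is near zero; swapping in an unrelated sentence as the "positive" makes it large, which is the gradient signal that shapes topic-aware sentence representations.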
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings
Prior studies diagnose the anisotropy problem in sentence representations
from pre-trained language models, e.g., BERT, without fine-tuning. Our analysis
reveals that the sentence embeddings from BERT suffer from a bias towards
uninformative words, limiting the performance in semantic textual similarity
(STS) tasks. To address this bias, we propose a simple and efficient
unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words
with model-based importance estimations and computes the weighted average of
word representations from pre-trained models as sentence embeddings. Ditto can
be easily applied to any pre-trained language model as a postprocessing
operation. Compared to prior sentence embedding approaches, Ditto adds no
parameters and requires no learning. Empirical evaluations demonstrate that
our proposed Ditto can alleviate the anisotropy problem and improve various
pre-trained models on STS tasks.
Comment: 8 pages, accepted by EMNLP 2023 as a short paper; the source code can be
found at https://github.com/alibaba-damo-academy/SpokenNLP/tree/main/ditt
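A minimal sketch of Diagonal Attention Pooling as described in the abstract: weight each token's hidden state by the diagonal of a self-attention map (how strongly a token attends to itself) and take the weighted average as the sentence embedding. The toy arrays below stand in for a real BERT's hidden states and attention maps, and which layer and head to take the diagonal from is the paper's choice, assumed here.

```python
import numpy as np

def ditto_pool(hidden_states, attention):
    """Diagonal Attention Pooling sketch: sentence embedding as the
    importance-weighted average of token representations, with importance
    taken from the self-attention diagonal. No parameters, no training."""
    w = np.diag(attention)        # per-token self-attention weight
    w = w / w.sum()               # normalize to a probability distribution
    return w @ hidden_states      # weighted average over tokens

# Toy stand-ins for a pre-trained model's outputs: 5 tokens, hidden dim 16.
rng = np.random.default_rng(0)
states = rng.normal(size=(5, 16))
attn_logits = rng.normal(size=(5, 5))
attn = np.exp(attn_logits) / np.exp(attn_logits).sum(axis=1, keepdims=True)
emb = ditto_pool(states, attn)
```

Because the weights come from the model's own attention, informative words are emphasized over uninformative ones, which is the bias-correction the abstract describes; with an identity attention map the pooling reduces to a plain mean.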