Experimental demonstration of graphene plasmons working close to the near-infrared window
Due to strong mode confinement, long propagation distance, and unique
tunability, graphene plasmons have been widely explored in the mid-infrared and
terahertz windows. However, it remains a major challenge to push graphene
plasmons to shorter wavelengths in order to integrate graphene plasmon concepts
with existing mature technologies in the near-infrared region. We investigate
localized graphene plasmons supported by graphene nanodisks and experimentally
demonstrate graphene plasmons working at 2 μm with the aid of a fully
scalable block copolymer self-assembly method. Our results show a promising way
to promote graphene plasmons for both fundamental studies and potential
applications in the near-infrared window.
Comment: 6 pages, 4 figures, a revised version
Substrate tolerant direct block copolymer nanolithography
Sub-20 nm block copolymer films directly applied on substrates and annealed in vapors of selective solvents significantly simplify the lithographic process.
Short-term load forecasting based on CEEMDAN-FE-ISSA-LightGBM model
To address the low forecasting accuracy caused by the strong non-stationarity of electric loads, this paper proposes a short-term load forecasting method that combines complete ensemble empirical mode decomposition with adaptive noise and fuzzy entropy (CEEMDAN-FE) with a Light Gradient Boosting Machine (LightGBM) optimized by an improved sparrow search algorithm (ISSA). First, the original data are decomposed by the CEEMDAN algorithm to obtain intrinsic mode functions (IMFs) and a residual. Second, the resulting sequences are regrouped according to their fuzzy entropy, yielding new sequences. Third, the new sequences are fed into the ISSA-LightGBM model for training and prediction; the improved sparrow search algorithm optimizes the parameters of the LightGBM model so that the model better fits the data, and the predicted values of each group output by the model are superimposed to obtain the final prediction. Finally, performance is assessed by comparing error metrics. The experiments showed that the smallest evaluation metrics were obtained in Case 1 (MAE = 32.251, MAPE = 0.0114, RMSE = 42.386, R2 = 0.997) and Case 2 (MAE = 3.866, MAPE = 0.003, RMSE = 5.940, R2 = 0.997).
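The fuzzy-entropy regrouping step can be illustrated with a minimal pure-Python sketch of the fuzzy entropy measure itself (the standard FuzzyEn(m, r) definition; the function names and default parameters here are illustrative assumptions, not the paper's implementation):

```python
import math

def _phi(u, m, r, n=2):
    """Mean fuzzy similarity of all pairs of m-length templates."""
    N = len(u)
    # Build templates with their own mean removed (removes local baseline).
    X = []
    for i in range(N - m + 1):
        seg = u[i:i + m]
        mu = sum(seg) / m
        X.append([v - mu for v in seg])
    total, count = 0.0, 0
    for i in range(len(X)):
        for j in range(len(X)):
            if i == j:
                continue
            d = max(abs(a - b) for a, b in zip(X[i], X[j]))  # Chebyshev distance
            total += math.exp(-(d ** n) / r)                 # fuzzy membership
            count += 1
    return total / count

def fuzzy_entropy(u, m=2, r=0.2):
    """FuzzyEn(m, r) = ln(phi_m) - ln(phi_{m+1}); higher = more irregular."""
    return math.log(_phi(u, m, r)) - math.log(_phi(u, m + 1, r))
```

In the pipeline above, IMFs with similar fuzzy entropy values are summed into a new sequence, and one LightGBM model is then trained per regrouped sequence.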
Imaging and variability studies of CTA 102 during the 2016 January γ-ray flare
The γ-ray bright blazar CTA 102 is studied using imaging (new 15 GHz
and archival 43 GHz Very Long Baseline Array, VLBA, data) and time-variable
optical flux density, polarization degree, and electric vector position angle
(EVPA) spanning 2015 June 1 to 2016 October 1, covering a prominent
γ-ray flare during 2016 January. The pc-scale jet indicates expansion
with oscillatory features up to 17 mas. Component proper motions are in the
range 0.04-0.33 mas/yr, with acceleration up to 1.2 mas followed by a slowing
down beyond 1.5 mas. A jet bulk Lorentz factor of 17.5, a position angle of
128.3 degrees, an inclination angle of 6.6 degrees, and an intrinsic half-opening
angle of 1.8 degrees are derived from the VLBA data. These inferences are
employed in a helical jet model to infer long-term variability in flux density,
polarization degree, and EVPA, and a rotation of the Stokes Q and U parameters. A
core distance of 22.9 pc, and magnetic field
strengths at 1 pc and at the core location of 1.57 G and 0.07 G, respectively, are
inferred using the core shift method. The study is useful in the context of
estimating jet parameters and in offering clues to distinguish mechanisms
responsible for variability over different timescales.
Comment: 20 pages, 7 figures, 3 tables; accepted for publication in ApJ
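As a sanity check, the two quoted field strengths are consistent with the B(r) ∝ r⁻¹ scaling commonly assumed in core-shift analyses of conical equipartition jets (the scaling itself is our assumption here, taken from standard practice rather than stated in the abstract):

```python
# Core-shift consistency check: assume B(r) = B1 * (r / 1 pc)**-1,
# the standard scaling for an equipartition conical jet.
B1_gauss = 1.57      # field strength at 1 pc (from the abstract)
r_core_pc = 22.9     # core distance from the jet base (from the abstract)

B_core = B1_gauss / r_core_pc
print(round(B_core, 2))  # → 0.07, matching the quoted core field strength
```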
AiM: Taking Answers in Mind to Correct Chinese Cloze Tests in Educational Applications
To automatically correct handwritten assignments, the traditional approach is
to use an OCR model to recognize characters and compare them to the answers. The
OCR model is easily confused when recognizing handwritten Chinese characters,
and the textual information of the answers is missing during model
inference. However, teachers always have these answers in mind when reviewing and
correcting assignments. In this paper, we focus on Chinese cloze test
correction and propose a multimodal approach (named AiM). The encoded
representations of the answers interact with the visual information of students'
handwriting. Instead of predicting 'right' or 'wrong', we perform sequence
labeling on the answer text to infer which answer characters differ from the
handwritten content in a fine-grained way. We take samples from OCR datasets as
positive samples for this task, and develop a negative sample augmentation
method to scale up the training data. Experimental results show that AiM
outperforms OCR-based methods by a large margin. Extensive studies demonstrate
the effectiveness of our multimodal approach.
Comment: Accepted to COLING 2022
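The fine-grained labeling and the negative-sample augmentation can be sketched in a toy form, assuming equal-length answer and handwritten strings; the function names and the random-corruption scheme are illustrative assumptions, not the paper's actual method:

```python
import random

def char_labels(answer, written):
    """Per-position labels over the answer text: 1 where the handwritten
    character differs from the answer character, 0 where it matches."""
    return [0 if a == w else 1 for a, w in zip(answer, written)]

def make_negative(answer, vocab, k=1, seed=0):
    """Corrupt k positions of a matching sample to synthesize a mismatch
    (a 'negative' training sample with known wrong positions)."""
    rng = random.Random(seed)
    written = list(answer)
    for i in rng.sample(range(len(answer)), k):
        # Replace with a different character from the vocabulary.
        written[i] = rng.choice([c for c in vocab if c != written[i]])
    return "".join(written)
```

A matching pair yields all-zero labels, while each synthesized negative carries exactly k positions labeled 1, giving the sequence-labeling model supervision at character granularity.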
A Study of the Merger History of the Galaxy Group HCG 62 Based on X-Ray Observations and SPH Simulations
We choose the bright compact group HCG 62, which was found to exhibit both
excess X-ray emission and high Fe abundance to the southwest of its core, as an
example to study the impact of mergers on chemical enrichment in the intragroup
medium. We first reanalyze the high-quality Chandra and XMM-Newton archival data
to search for evidence of additional SN II yields, which are expected as a
direct result of a possible merger-induced starburst. We find that, similar
to the Fe abundance, the Mg abundance also shows high values in both the
innermost region and the southwest substructure, forming a high-abundance
plateau, while all the SN Ia and SN II yields show rather flat
distributions, in favor of an early enrichment. Then we carry
out a series of idealized numerical simulations to model the collision of two
initially isolated galaxy groups using the TreePM-SPH GADGET-3 code. We find
that the observed X-ray emission and metal distributions, as well as the
relative positions of the two bright central galaxies with respect to the
X-ray peak, can be well reproduced in a major merger with a mass ratio of 3
when a merger-induced starburst is assumed. The `best-match' snapshot is
pinpointed after the third pericentric passage, when the southwest substructure
is formed by gas sloshing. By following the evolution of the simulated
merging system, we conclude that the effects of such a major merger on chemical
enrichment are mostly restricted to the core region by the time the final relaxed
state is reached.
Comment: Accepted for publication in the Astrophysical Journal
Towards Real-World Writing Assistance: A Chinese Character Checking Benchmark with Faked and Misspelled Characters
Writing assistance is an application closely related to human life and is
also a fundamental Natural Language Processing (NLP) research field. Its aim is
to improve the correctness and quality of input texts, with character checking
being crucial for detecting and correcting wrong characters. In the
real world, where handwriting occupies the vast majority of writing, the
characters that humans get wrong include faked characters (i.e., untrue
characters created through writing errors) and misspelled characters (i.e., true
characters used incorrectly due to spelling errors). However, existing datasets
and related studies focus only on misspelled characters, mainly caused by
phonological or visual confusion, thereby ignoring faked characters, which are
more common and more difficult. To break through this dilemma, we present
Visual-C3, a human-annotated Visual Chinese Character Checking dataset with
faked and misspelled Chinese characters. To the best of our knowledge,
Visual-C3 is the first real-world visual dataset and the largest human-crafted
dataset for the Chinese character checking scenario. Additionally, we
propose and evaluate novel baseline methods on Visual-C3. Extensive
empirical results and analyses show that Visual-C3 is high-quality yet
challenging. The Visual-C3 dataset and the baseline methods will be publicly
available to facilitate further research in the community.
Comment: Work in progress
Robust deep semi-supervised learning with label propagation and differential privacy
Semi-supervised learning (SSL) methods provide a powerful tool for exploiting abundant unlabeled data to strengthen standard supervised learning. Traditional graph-based SSL methods prevail in classical SSL problems thanks to their intuitive implementation and effective performance. However, they run into trouble when applied to image classification with modern deep learning, since the diffusion algorithms face the curse of dimensionality. In this study, we propose a simple and efficient SSL method that combines a graph-based SSL paradigm with differential privacy. We aim to develop a coherent latent feature space for deep neural networks so that the diffusion algorithm operating in that latent space can give more precise predictions for unlabeled data. Our approach achieves state-of-the-art performance on the CIFAR-10, CIFAR-100, and Mini-ImageNet benchmark datasets and obtains an error rate of 18.56% on CIFAR-10 using only 1% of all labels. Furthermore, our approach inherits the benefits of graph-based SSL methods with a simple training process and can easily be combined with any network architecture.
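The graph-based diffusion that underlies this approach can be sketched with a classic label-propagation loop over a Gaussian-affinity graph; here plain feature vectors stand in for the deep latent features, and the function name and all hyperparameters are illustrative assumptions:

```python
import numpy as np

def label_propagation(X, y, n_classes, alpha=0.9, sigma=1.0, iters=50):
    """Diffuse labels over a Gaussian-affinity graph.
    X: (n, d) feature matrix; y: length-n int array, -1 for unlabeled."""
    # Pairwise squared distances -> Gaussian affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                  # no self-loops
    Dinv = 1.0 / np.sqrt(W.sum(1))
    S = W * Dinv[:, None] * Dinv[None, :]     # symmetric normalization
    Y = np.zeros((len(y), n_classes))
    Y[y >= 0, y[y >= 0]] = 1.0                # one-hot seeds for labeled points
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y   # propagate, keep pulling to seeds
    return F.argmax(1)
```

In the paper's setting the diffusion runs over deep latent features rather than raw inputs, which is precisely why a well-shaped, lower-dimensional latent space makes the propagated predictions more reliable.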