On the Seesaw Scale in Supersymmetric SO(10) Models
The seesaw mechanism, which is responsible for the description of neutrino
masses and mixing, requires a scale lower than the unification scale. We
propose a new model with spinor superfields playing important roles to generate
this seesaw scale, with special attention paid to the Goldstone mode of the
symmetry breaking. Comment: 15 pages
Effect of deposition conditions and thermal annealing on the charge trapping properties of SiNx films
The density of charge trapping centers in SiNx:H films deposited by plasma enhanced chemical
vapor deposition is investigated as a function of film stoichiometry and postdeposition annealing
treatments. In the as-deposited films, the defect density is observed to increase with an increasing
N/Si ratio x in the range of 0.89–1.45, and to correlate with the N–H bond density. Following the
annealing in the temperature range of 500–800 °C, the defect density increases for all N/Si ratios,
with the largest increase observed in the most Si rich samples. However, the defect density always
remains highest in the most N rich films. The better charge storage ability suggests the N rich films
are more suitable for the creation of negatively charged nitride films on solar cells. Financial support
from the Australian Research Council (LP0883613) is gratefully acknowledged.
Superradiance Lattice
We show that the timed Dicke states of a collection of three-level atoms can
form a tight-binding lattice in momentum space. This lattice, coined the
superradiance lattice (SL), can be constructed based on electromagnetically
induced transparency (EIT). For a one-dimensional SL, we need the coupling
field of the EIT system to be a standing wave. The detuning between the two
components of the standing wave introduces an effective uniform force in
momentum space. The quantum lattice dynamics, such as Bloch oscillations,
Wannier-Stark ladders, Bloch band collapsing and dynamic localization can be
observed in the SL. The two-dimensional SL provides a flexible platform for
Dirac physics in graphene. The SL can be extended to three and higher
dimensions, where no analogous real-space lattices exist and new physics is
waiting to be explored. Comment: 6 pages, 4 figures
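For orientation, the quantum lattice dynamics listed in the abstract (Bloch oscillations, Wannier-Stark ladders) are those of a one-dimensional tight-binding model under a uniform force. The sketch below uses generic textbook notation (hopping amplitude $J$, force $F$, site operators $b_n$, lattice constant $a$), not necessarily the paper's own symbols:

```latex
% Generic 1D tight-binding Hamiltonian with a uniform force F
% (textbook notation; not the paper's momentum-space construction).
H = -J \sum_n \left( b_{n+1}^{\dagger} b_n + \mathrm{h.c.} \right)
    + F a \sum_n n \, b_n^{\dagger} b_n ,
\qquad
T_{\mathrm{Bloch}} = \frac{2\pi\hbar}{F a}
```

The force term tilts the lattice, producing a Wannier-Stark ladder with level spacing $Fa$ and Bloch oscillations of period $T_{\mathrm{Bloch}}$; in the SL the "force" is supplied by the detuning between the standing-wave components, with the lattice living in momentum rather than real space.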
A hybrid representation based simile component extraction
Simile, a special type of metaphor, helps people express their ideas more clearly. Simile component extraction is the task of extracting tenors and vehicles from sentences. The task has practical significance, since it is useful for building cognitive knowledge bases. With the development of deep neural networks, researchers have begun to apply neural models to component extraction. Simile components should come from different domains, and according to our observations, words from different domains usually carry different concepts. Concept information is therefore important when identifying whether two words are simile components. However, existing models do not integrate concepts, and it is difficult for them to identify the concept of a word. Moreover, the corpora available for simile component extraction are limited: they contain many rare or unseen words, and the representations of such words are often inadequate. Existing models can hardly extract simile components accurately when sentences contain these low-frequency words. To solve these problems, we propose a hybrid representation-based component extraction (HRCE) model. Each word in HRCE is represented at three different levels: word level, concept level and character level. Concept representations (representations at the concept level) help HRCE identify cross-domain word pairs more accurately. Moreover, with the help of character representations (representations at the character level), HRCE can represent the meaning of a word more properly, since words consist of characters and these characters partly convey the meaning of the word. We conduct experiments comparing HRCE with existing models; the results show that HRCE significantly outperforms current models.
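The three-level representation described above can be sketched as a simple concatenation of word-, concept- and character-level vectors. Everything below (the embedding sizes, the toy vocabulary, the word-to-concept mapping, and mean-pooling over characters) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding tables (sizes are invented; a real model would learn these).
word_emb = {w: rng.normal(size=8) for w in ["crescent", "moon", "boat"]}
concept_emb = {c: rng.normal(size=4) for c in ["celestial", "vehicle"]}
char_emb = {ch: rng.normal(size=4) for ch in "abcdefghijklmnopqrstuvwxyz"}

# Hypothetical word -> concept mapping (e.g. taken from a taxonomy).
concept_of = {"crescent": "celestial", "moon": "celestial", "boat": "vehicle"}

def hybrid_representation(word: str) -> np.ndarray:
    """Concatenate word-, concept- and character-level vectors for one word."""
    w = word_emb[word]                                  # word level
    c = concept_emb[concept_of[word]]                   # concept level
    ch = np.mean([char_emb[x] for x in word], axis=0)   # character level
    return np.concatenate([w, c, ch])                   # 8 + 4 + 4 = 16 dims

print(hybrid_representation("moon").shape)  # (16,)
```

For a rare word with a poor word-level vector, the concept and character slots still carry signal, which is the motivation the abstract gives for the hybrid representation; a cross-domain pair like ("moon", "boat") differs in the concept slot, which is the cue for identifying simile components.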
InitialGAN: A Language GAN with Completely Random Initialization
Text generative models trained via Maximum Likelihood Estimation (MLE) suffer
from the notorious exposure bias problem, and Generative Adversarial Networks
(GANs) are shown to have potential to tackle this problem. Existing language
GANs adopt estimators like REINFORCE or continuous relaxations to model word
distributions. The inherent limitations of such estimators lead current models
to rely on pre-training techniques (MLE pre-training or pre-trained
embeddings). Representation modeling methods which are free from those
limitations, however, are seldom explored because of their poor performance
in previous attempts. Our analyses reveal that invalid sampling methods and
unhealthy gradients are the main contributors to such unsatisfactory
performance. In this work, we present two techniques to tackle these problems:
dropout sampling and fully normalized LSTM. Based on these two techniques, we
propose InitialGAN, whose parameters are randomly initialized in full. In
addition, we introduce a new evaluation metric, Least Coverage Rate, to better evaluate
the quality of generated samples. The experimental results demonstrate that
InitialGAN outperforms both MLE and other compared models. To the best of our
knowledge, it is the first time a language GAN can outperform MLE without using
any pre-training techniques.
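The exposure bias this abstract refers to is the train/inference mismatch of MLE: the model is trained to predict the next token from ground-truth prefixes, but at generation time it must condition on its own samples. A minimal, framework-free sketch of that mismatch, using an invented toy corpus and a bigram model (for which MLE reduces to normalized counts):

```python
import random
from collections import defaultdict

# Toy corpus; a real model would be an LSTM/Transformer trained with MLE.
corpus = [["the", "cat", "sat"], ["the", "dog", "ran"]]

# MLE for a bigram model is just normalized bigram counts.
counts = defaultdict(lambda: defaultdict(int))
for sent in corpus:
    for a, b in zip(sent, sent[1:]):
        counts[a][b] += 1

def next_dist(prefix_word):
    seen = counts[prefix_word]
    total = sum(seen.values())
    return {w: c / total for w, c in seen.items()} if total else {}

# Teacher forcing: prefixes always come from the data, so every
# conditioning context was seen during training.
print(next_dist("cat"))  # {'sat': 1.0}

# Free running: the model conditions on its OWN samples, and can land
# on contexts it was never trained to continue (next_dist returns {}).
random.seed(0)
word, generated = "the", ["the"]
while True:
    dist = next_dist(word)
    if not dist:  # context unseen as a prefix: the model is "exposed"
        break
    word = random.choices(list(dist), weights=list(dist.values()))[0]
    generated.append(word)
print(generated)
```

In this toy the generation simply halts at an unseen context; in a real autoregressive model the analogue is degraded continuations once sampling drifts off the training distribution, which is the problem GAN-style training is meant to mitigate.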
Interconnecting bilayer networks
A typical complex system should be described by a supernetwork, or a network
of networks, in which each component network is coupled to other networks. As
a first step toward understanding complex systems at such a more systematic
level, scientists have studied interdependent multilayer networks. In this
letter, we introduce a new kind of interdependent multilayer network, namely
interconnecting networks, in which the component networks are coupled to each
other by sharing some common nodes. Based on empirical investigations, we
reveal a common feature of such interconnecting networks: networks with
smaller averaged topological differences of the interconnecting nodes tend to
share more nodes. A very simple node-sharing mechanism is proposed to
analytically explain this observed feature. Comment: 9 pages
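The quantities in the observation above can be made concrete with a small sketch: identify the interconnecting (shared) nodes of two layers, then average a topological difference over them. The toy layers and the degree-based difference measure are illustrative assumptions, not necessarily the letter's exact definitions:

```python
# Two toy network layers as adjacency sets; a shared node name means the
# same entity appears in both layers (an "interconnecting node").
layer_a = {"u": {"v", "w"}, "v": {"u"}, "w": {"u"}, "x": {"y"}, "y": {"x"}}
layer_b = {"u": {"p"}, "p": {"u", "q"}, "q": {"p"}, "x": {"p"}}

# Interconnecting nodes: nodes present in both layers.
shared = set(layer_a) & set(layer_b)

def degree(adj, n):
    return len(adj.get(n, ()))

def topo_diff(n):
    """One possible per-node topological difference: absolute difference
    of the node's degrees in the two layers (an assumed measure)."""
    return abs(degree(layer_a, n) - degree(layer_b, n))

avg_diff = sum(topo_diff(n) for n in shared) / len(shared)
print(sorted(shared))  # ['u', 'x']
print(avg_diff)        # 0.5
```

The empirical feature would then read: across pairs of layers, those with a smaller `avg_diff` over their interconnecting nodes tend to have a larger `shared` set.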