Generalized quantization condition in topological insulator
The topological magnetoelectric effect (TME) is the fundamental quantization
effect for topological insulators, measured in units of the fine-structure
constant α. In [Phys. Rev. Lett. 105, 166803 (2010)], a topological
quantization condition for the TME was given under normal incidence of the
optical beam, in which the wavelength of the light or the thickness of the
topological insulator (TI) film must be tuned to certain commensurate values.
Such fine tuning is difficult to realize experimentally. In this article, we
give manifestly covariant expressions for the Kerr and Faraday angles at
oblique incidence on a TI thick film. We obtain a generalized quantization
condition that is independent of material details, and we propose a more
easily realizable optical experiment, in which only the incidence angle is
tuned, to directly measure the topological quantization associated with the
TME.
Comment: 3 figures
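For context, the quantization the abstract refers to is conventionally expressed through the axion term of the topological field theory of insulators. The following is a standard textbook statement (background knowledge, not taken from this abstract), in Gaussian units with α = e²/ħc:

```latex
% Axion electrodynamics term in the effective Lagrangian of a 3D TI;
% time-reversal symmetry pins theta to odd multiples of pi.
\mathcal{L}_\theta
  = \frac{\theta}{2\pi}\,\frac{\alpha}{2\pi}\,\mathbf{E}\cdot\mathbf{B},
  \qquad \theta = (2n+1)\pi .

% Differentiating with respect to the fields gives the quantized
% magnetoelectric response that constitutes the TME:
\left.\frac{\partial P}{\partial B}\right|_{E=0}
  = \left.\frac{\partial M}{\partial E}\right|_{B=0}
  = \frac{\theta\,\alpha}{4\pi^{2}}
  = (2n+1)\,\frac{\alpha}{4\pi}.
```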
Investigation of a Side-polished Fiber MZI and Its Sensing Performance
A novel all-fiber Mach–Zehnder interferometer (MZI), which consists of a short section of side-polished single-mode fiber (SMF) fusion-spliced with a lateral core offset between two SMFs, is proposed and demonstrated. A simple fiber side-polishing platform was built to control the polished depth under a microscope. The sensitivity of the fiber MZI to the surrounding refractive index (RI) improves greatly as the side-polished depth increases, while the temperature sensitivity is unaffected. A sensor with a polished depth of 44.2 μm exhibited an RI sensitivity of up to -118.0 nm/RIU (RI unit) over the RI range from 1.333 to 1.387, in good agreement with simulation results obtained with the beam propagation method (BPM). In addition, the fiber MZI structure can also measure RI and temperature simultaneously. These results show its potential for in-line fiber-type sensing applications.
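As a rough illustration of how such a dip-shift sensitivity arises (a toy two-beam interference model, not the paper's BPM simulation; the length L and the delta_n_eff coefficients below are assumptions tuned only to reproduce the reported order of magnitude), the MZI transmission dips satisfy 2πΔn_eff·L/λ = (2m+1)π, so an RI-induced change in Δn_eff shifts each dip wavelength:

```python
# Toy two-beam interference model of a fiber MZI (illustrative only).

L = 5e-3  # effective length of the polished section (m), assumed

def delta_n_eff(n_surround: float) -> float:
    """Core/cladding effective-index difference vs. surrounding RI.

    Toy linear model: raising the surrounding RI pulls the cladding-mode
    index up, shrinking the difference. Coefficients are illustrative.
    """
    return 0.012 - 0.0009 * (n_surround - 1.333)

def dip_wavelength(n_surround: float, lam0: float = 1.55e-6) -> float:
    """Wavelength of the interference dip nearest lam0.

    Dips occur where the MZI phase 2*pi*dn*L/lam equals (2m+1)*pi,
    i.e. lam_m = 2*dn*L / (2m + 1).
    """
    dn_L = delta_n_eff(n_surround) * L
    m = round(dn_L / lam0 - 0.5)  # dip order closest to lam0
    return 2 * dn_L / (2 * m + 1)

# Finite-difference estimate of the RI sensitivity d(lambda_dip)/dn in nm/RIU.
n1, n2 = 1.333, 1.343
sensitivity = (dip_wavelength(n2) - dip_wavelength(n1)) / (n2 - n1) * 1e9
print(f"toy-model dip sensitivity ~ {sensitivity:.0f} nm/RIU")  # ~ -117
```

The negative sign mirrors the behavior reported in the abstract: as the surrounding RI rises toward the cladding index, Δn_eff shrinks and the dips shift to shorter wavelengths.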
Retrieval of phase memory in two independent atomic ensembles by Raman process
In the spontaneous Raman process in an atomic cell at high gain, both the
Stokes field and the accompanying collective atomic excitation (atomic spin
wave) are coherent. We find that, due to the spontaneous nature of the
process, the phases of the Stokes field and of the atomic spin wave change
randomly from one realization to another but are anti-correlated. The phases
of the atomic ensembles are read out via another Raman process at a later
time, thus realizing a phase memory in the atoms. The observation of phase
correlation between the Stokes field and the collective atomic excitations is
an important step towards macroscopic EPR-type entanglement of continuous
variables between light and atoms.
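For orientation, the anti-correlation can be seen from a standard parametric model of high-gain Raman scattering (background knowledge, not a derivation taken from this abstract):

```latex
% Two-mode-squeezing form of the high-gain Raman interaction
% (\hat a_S: Stokes mode, \hat S_a: collective spin wave, \eta \propto pump):
H_{\mathrm{int}} = i\hbar\,\eta\,\hat a_S^{\dagger}\hat S_a^{\dagger} + \text{h.c.}

% Excitations are created strictly in pairs, so while \varphi_S and
% \varphi_a are each random from shot to shot, their sum is pinned by
% the pump phase:
\varphi_S + \varphi_a = \varphi_{\mathrm{pump}} + \text{const.}
```

Because only the sum of the phases is fixed, each phase alone looks random, and the correlation only appears once the spin-wave phase is read out by the second Raman process.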
Reformulating Domain Adaptation of Large Language Models as Adapt-Retrieve-Revise
While large language models (LLMs) such as GPT-4 have recently demonstrated
astonishing zero-shot capabilities on general-domain tasks, they often
generate content with hallucinations in specific domains such as Chinese law,
hindering their application in these areas. This typically stems from the
absence of training data covering such a specific domain, preventing GPT-4
from acquiring in-domain knowledge. A pressing challenge is that it is not
feasible to continue training LLMs of such scale on in-domain data.
This paper introduces a simple and effective domain-adaptation framework for
GPT-4 by reformulating generation as an adapt-retrieve-revise process. The
first step is to adapt an affordable 7B LLM to the target domain by continued
training on in-domain data. When solving a task, we use the adapted LLM to
generate a draft answer for the task query. The draft answer is then used to
retrieve candidate supporting evidence from an external in-domain knowledge
base. Finally, the draft answer and the retrieved evidence are concatenated
into a single prompt so that GPT-4 can assess the evidence and revise the
draft answer to produce the final answer.
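A minimal sketch of this three-stage pipeline follows; all names (adapted_llm, knowledge_base, gpt4) are hypothetical placeholders for a domain-adapted 7B model, a retriever over an in-domain corpus, and a GPT-4 client, not the authors' released code:

```python
# Sketch of the adapt-retrieve-revise pipeline described above.
# The adapted_llm, knowledge_base, and gpt4 interfaces are assumed
# placeholders, not an actual API from the paper.

def adapt_retrieve_revise(query: str, adapted_llm, knowledge_base, gpt4,
                          top_k: int = 3) -> str:
    # 1) ADAPT: the 7B model, already continued-trained on in-domain text,
    #    drafts an answer rich in domain terminology (but possibly wrong).
    draft = adapted_llm.generate(query)

    # 2) RETRIEVE: the draft (not the bare query) serves as the retrieval
    #    key, since it surfaces in-domain terms that match the corpus.
    evidence = knowledge_base.search(draft, top_k=top_k)

    # 3) REVISE: GPT-4 sees query, draft, and evidence together, keeps
    #    what the evidence supports, and rewrites the rest.
    prompt = (
        f"Question: {query}\n\n"
        f"Draft answer: {draft}\n\n"
        "Evidence:\n" + "\n".join(f"- {e}" for e in evidence) + "\n\n"
        "Assess the draft against the evidence and produce a corrected, "
        "well-supported final answer."
    )
    return gpt4.generate(prompt)
```

Using the draft rather than the raw query as the retrieval key is the design choice that lets the cheap adapted model improve GPT-4's grounding without retraining GPT-4 itself.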
Our proposal combines the efficiency of adapting a smaller 7B model with the
evidence-assessing capability of GPT-4, and it effectively prevents GPT-4
from generating hallucinatory content. In the zero-shot setting on four
Chinese legal tasks, our method improves accuracy by 33.3% over direct
generation by GPT-4. Compared with two stronger retrieval-based baselines,
our method outperforms them by 15.4% and 23.9%. Our code will be released.
Comment: Under submission to ICLR 202