Sliding Mode Attitude Maneuver Control for Rigid Spacecraft without Unwinding
In this paper, attitude maneuver control without the unwinding phenomenon is
investigated for rigid spacecraft. First, a novel switching function is
constructed from a hyperbolic sine function. It is shown that the spacecraft
system exhibits unwinding-free performance when the system states are on the
sliding surface. Based on the designed switching function, a sliding mode
controller is developed to ensure the robustness of the attitude maneuver
control system. Another essential feature of the presented attitude control
law is that a dynamic parameter is introduced to guarantee unwinding-free
performance when the system states are off the sliding surface. The
simulation results demonstrate that the unwinding phenomenon is avoided
during the attitude maneuver of a rigid spacecraft by adopting the
constructed switching function and the proposed attitude control scheme.
Anti-Unwinding Sliding Mode Attitude Maneuver Control for Rigid Spacecraft
In this paper, anti-unwinding attitude maneuver control for rigid spacecraft
is considered. First, to avoid the unwinding phenomenon when the system
states are restricted to the switching surface, a novel switching function is
constructed from hyperbolic sine functions such that the switching surface
contains two equilibria. Then, a sliding mode attitude maneuver controller is
designed based on the constructed switching function to ensure the robustness
of the closed-loop attitude maneuver control system to disturbances. Another
important feature of the developed attitude control law is that a dynamic
parameter is introduced to guarantee anti-unwinding performance before the
system states reach the switching surface. The simulation results demonstrate
that the unwinding problem is resolved during attitude maneuvers of rigid
spacecraft by adopting the newly constructed switching function and the
proposed attitude control scheme.
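Both entries above describe the same construction: a switching surface built
from hyperbolic sine functions of the attitude quaternion, chosen so that both
quaternion representations of the target attitude, q = (+1, 0) and q = (-1, 0),
lie on the surface. As a minimal sketch of why this removes unwinding, consider
the illustrative form s = omega + k*sinh(c*q0)*qv; this concrete form is an
assumption for illustration, not the papers' exact switching function.

```python
# Minimal sketch of a hyperbolic-sine switching surface with two equilibria.
# The form s = w + k*sinh(c*q0)*qv is an illustrative assumption, not the
# exact switching function from the papers above.
import numpy as np

def switching_function(w, q0, qv, k=1.0, c=1.0):
    """s = omega + k*sinh(c*q0)*qv; the sliding surface is s = 0."""
    return w + k * np.sinh(c * q0) * qv

# Both quaternion equilibria (+1, 0) and (-1, 0) describe the same physical
# attitude, and both satisfy s = 0 with zero angular velocity:
for q0 in (+1.0, -1.0):
    print(q0, switching_function(np.zeros(3), q0, np.zeros(3)))

# On the surface, w = -k*sinh(c*q0)*qv, and the quaternion kinematics
# qv_dot ~ 0.5*q0*w give qv_dot = -0.5*k*q0*sinh(c*q0)*qv. Since
# q0*sinh(c*q0) >= 0 for every q0, the vector part decays toward the nearer
# of the two equilibria instead of always being driven to q0 = +1, which is
# exactly the unwinding-free behavior the abstracts describe.
```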
Di-μ-chlorido-bis(chlorido{2,2′-[3-(1H-imidazol-4-ylmethyl)-3-azapentane-1,5-diyl]diphthalimide}copper(II))
The centrosymmetric dinuclear CuII complex, [Cu2Cl4(C24H21N5O4)2], was synthesized by the reaction of CuCl2·2H2O with the tripodal ligand 2,2′-[3-(1H-imidazol-4-ylmethyl)-3-azapentane-1,5-diyl]diphthalimide (L). Each of the CuII ions is coordinated by two N atoms from the ligand, two bridging Cl atoms and one terminal Cl atom. The CuII coordination can best be described as a transition state between four- and five-coordination, since one of the bridging Cl atoms has a much longer Cu—Cl bond distance [2.7069 (13) Å] than the other [2.2630 (12) Å]. In addition, the Cu⋯Cu distance is 3.622 (1) Å. The three-dimensional structure is generated by N—H⋯O, C—H⋯O and C—H⋯Cl hydrogen bonds and π–π interactions [centroid–centroid distances = 3.658 (4) and 4.020 (4) Å].
The Apical Targeting Signal of the P2Y2 Receptor Is Located in Its First Extracellular Loop
P2Y2 and P2Y4 receptors, which have 52% sequence identity, are both expressed at the apical membrane of Madin-Darby canine kidney cells, but the locations of their apical targeting signals are distinctly different. The targeting signal of the P2Y2 receptor is located between the N terminus and the seventh transmembrane domain (7TM), whereas that of the P2Y4 receptor is present in its C-terminal tail. To identify the apical targeting signal in the P2Y2 receptor, regions of the P2Y2 receptor were progressively substituted with the corresponding regions of the P2Y4 receptor lacking its targeting signal. Characterization of these chimeras and subsequent mutational analysis revealed that four amino acids (Arg95, Gly96, Asp97, and Leu108) in the first extracellular loop play a major role in apical targeting of the P2Y2 receptor. Mutation of RGD to RGE had no effect on P2Y2 receptor targeting, indicating that receptor-integrin interactions are not involved in apical targeting. P2Y2 receptor mutants were localized in a similar manner in Caco-2 colon epithelial cells. This is the first identification of an extracellular protein-based targeting signal in a seven-transmembrane receptor.
DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment
Recent research demonstrates the effectiveness of using pre-trained language
models for legal case retrieval. Most of the existing works focus on improving
the representation ability for the contextualized embedding of the [CLS] token
and calculate relevance using textual semantic similarity. However, in the
legal domain, textual semantic similarity does not always imply that the cases
are relevant enough. Instead, relevance in legal cases primarily depends on the
similarity of key facts that impact the final judgment. Without proper
treatment, the discriminative ability of learned representations could be
limited since legal cases are lengthy and contain numerous non-key facts. To
this end, we introduce DELTA, a discriminative model designed for legal case
retrieval. The basic idea involves pinpointing key facts in legal cases and
pulling the contextualized embedding of the [CLS] token closer to the key facts
while pushing it away from the non-key facts, which can warm up the case embedding
space in an unsupervised manner. Specifically, this study introduces a word
alignment mechanism into the contextual masked auto-encoder. First, we leverage
shallow decoders to create information bottlenecks, aiming to enhance the
representation ability. Second, we employ the deep decoder to enable
translation between different structures, with the goal of pinpointing key
facts to enhance discriminative ability. Comprehensive experiments conducted on
publicly available legal benchmarks show that our approach can outperform
existing state-of-the-art methods in legal case retrieval. It provides a new
perspective on the in-depth understanding and processing of legal case
documents.
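The warm-up described above, pulling the [CLS] embedding toward key facts and
pushing it away from non-key facts, is essentially a contrastive objective over
sentence embeddings. A minimal sketch, assuming an InfoNCE-style loss (the
abstract does not specify the exact formulation, and the key/non-key split is
taken as given from the structural word alignment step):

```python
# Hedged sketch of a key-fact contrastive warm-up loss in the spirit of the
# DELTA abstract; the InfoNCE-style formulation and temperature are assumptions.
import torch
import torch.nn.functional as F

def keyfact_contrastive_loss(cls_emb, key_emb, nonkey_emb, tau=0.05):
    """Pull the [CLS] case vector toward key-fact embeddings and push it
    away from non-key-fact embeddings.

    cls_emb:    (d,)   contextualized [CLS] vector of the case
    key_emb:    (K, d) embeddings of sentences identified as key facts
    nonkey_emb: (M, d) embeddings of the remaining sentences
    """
    cls_emb = F.normalize(cls_emb, dim=-1)
    key_emb = F.normalize(key_emb, dim=-1)
    nonkey_emb = F.normalize(nonkey_emb, dim=-1)

    pos = cls_emb @ key_emb.T / tau     # (K,) similarity to each key fact
    neg = cls_emb @ nonkey_emb.T / tau  # (M,) similarity to each non-key fact

    # Each key fact acts as the positive against all non-key facts.
    logits = torch.cat([pos.unsqueeze(1), neg.expand(len(pos), -1)], dim=1)
    labels = torch.zeros(len(pos), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```

Minimizing this loss moves the case embedding toward the key-fact region of the
space without any relevance labels, which is what lets the warm-up run in an
unsupervised manner.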
SAILER: Structure-aware Pre-trained Language Model for Legal Case Retrieval
Legal case retrieval, which aims to find relevant cases for a query case,
plays a core role in the intelligent legal system. Despite the success that
pre-training has achieved in ad-hoc retrieval tasks, effective pre-training
strategies for legal case retrieval remain to be explored. Compared with
general documents, legal case documents are typically long text sequences with
intrinsic logical structures. However, most existing language models have
difficulty understanding the long-distance dependencies between different
structures. Moreover, in contrast to general retrieval, relevance in
the legal domain is sensitive to key legal elements. Even subtle differences in
key legal elements can significantly affect the judgement of relevance.
However, existing pre-trained language models designed for general purposes
have not been equipped to handle legal elements.
To address these issues, in this paper, we propose SAILER, a new
Structure-Aware pre-traIned language model for LEgal case Retrieval. It is
distinguished by the following three aspects: (1) SAILER fully utilizes the
structural information contained in legal case documents and pays more
attention to key legal elements, similar to how legal experts browse legal case
documents. (2) SAILER employs an asymmetric encoder-decoder architecture to
integrate several different pre-training objectives. In this way, rich semantic
information across tasks is encoded into dense vectors. (3) SAILER has powerful
discriminative ability, even without any legal annotation data. It can
distinguish legal cases with different charges accurately. Extensive
experiments over publicly available legal benchmarks demonstrate that our
approach can significantly outperform previous state-of-the-art methods in
legal case retrieval.
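A minimal sketch of the asymmetric encoder-decoder idea in point (2): a deep
encoder must compress the case into a single dense vector because only a weak,
shallow decoder is available to reconstruct masked text from it. The layer
counts, the single reconstruction objective, and the module wiring below are
assumptions; the paper combines several pre-training objectives.

```python
# Hedged sketch of an asymmetric (deep encoder / shallow decoder) pre-training
# setup. Hyperparameters and wiring are illustrative assumptions.
import torch
import torch.nn as nn

class AsymmetricPretrainer(nn.Module):
    def __init__(self, vocab=30522, d=768, enc_layers=12, dec_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        make_layer = lambda: nn.TransformerEncoderLayer(d, nhead=12,
                                                        batch_first=True)
        self.encoder = nn.TransformerEncoder(make_layer(), enc_layers)  # strong
        self.decoder = nn.TransformerEncoder(make_layer(), dec_layers)  # weak
        self.lm_head = nn.Linear(d, vocab)

    def forward(self, case_ids, masked_section_ids):
        # Encode the case; position 0 plays the role of the dense [CLS] vector.
        cls_vec = self.encoder(self.embed(case_ids))[:, :1, :]
        # The shallow decoder sees only the [CLS] bottleneck plus a heavily
        # masked section, so reconstruction quality depends on how much case
        # semantics the encoder packed into that single vector.
        dec_in = torch.cat([cls_vec, self.embed(masked_section_ids)], dim=1)
        return self.lm_head(self.decoder(dec_in))  # (B, 1 + L, vocab) logits
```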
BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models
Large Language Models (LLMs) like ChatGPT and GPT-4 are versatile and capable
of addressing a diverse range of tasks. However, general LLMs, which are
developed on open-domain data, may lack the domain-specific knowledge essential
for tasks in vertical domains such as law and medicine. To address this
issue, previous approaches either conduct continuous pre-training with
domain-specific data or employ retrieval augmentation to support general LLMs.
Unfortunately, these strategies are either cost-intensive or unreliable in
practical applications. To this end, we present a novel framework named BLADE,
which enhances Black-box LArge language models with small Domain-spEcific
models. BLADE consists of a black-box LLM and a small domain-specific LM. The
small LM preserves domain-specific knowledge and offers specialized insights,
while the general LLM contributes robust language comprehension and reasoning
capabilities. Specifically, our method involves three steps: 1) pre-training
the small LM with domain-specific data, 2) fine-tuning this model using
knowledge instruction data, and 3) joint Bayesian optimization of the general
LLM and the small LM. Extensive experiments conducted on public legal and
medical benchmarks reveal that BLADE significantly outperforms existing
approaches. This demonstrates the potential of BLADE as an effective and
cost-efficient solution for adapting general LLMs to vertical domains.
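At inference time, the framework described above reduces to a two-stage
pipeline: the small domain LM supplies specialized knowledge, and the black-box
LLM reasons over it. A minimal sketch, with hypothetical generate interfaces
standing in for the real model APIs:

```python
# Hedged sketch of a BLADE-style inference flow. The prompt wording and the
# .generate(str) -> str interfaces are illustrative assumptions; the three
# training steps (domain pre-training, knowledge-instruction fine-tuning, and
# joint Bayesian optimization) are assumed to have happened beforehand.

def blade_answer(question: str, small_domain_lm, blackbox_llm) -> str:
    # 1) The small, domain-specialized LM produces background knowledge
    #    relevant to the question (e.g., applicable statutes for legal QA).
    knowledge = small_domain_lm.generate(
        f"Provide domain knowledge relevant to: {question}"
    )
    # 2) The general black-box LLM answers conditioned on that knowledge,
    #    contributing broad language comprehension and reasoning.
    prompt = (
        f"Background knowledge:\n{knowledge}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return blackbox_llm.generate(prompt)
```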