Well-posedness of the fractional Ginzburg-Landau equation
In this paper, we investigate the well-posedness of the real fractional Ginzburg-Landau equation in several function spaces that have been used for the Burgers equation, the semilinear heat equation, the Navier-Stokes equations, etc. The long-time asymptotic behavior of nonnegative global solutions is also studied in detail.
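A commonly studied form of this equation (the abstract does not give the paper's exact nonlinearity or exponents, so the following model form is an assumption) is

```latex
\partial_t u + (-\Delta)^{\alpha} u = u - |u|^{2\sigma}u,
\qquad x \in \mathbb{R}^n,\ t > 0,\ \alpha \in (0,1),
```

where $(-\Delta)^{\alpha}$ denotes the fractional Laplacian defined via the Fourier multiplier $|\xi|^{2\alpha}$; well-posedness is then posed in function spaces adapted to the semigroup $e^{-t(-\Delta)^{\alpha}}$.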
Combining radiofrequency ablation and ethanol injection may achieve comparable long-term outcomes in larger hepatocellular carcinoma (3.1–4 cm) and in high-risk locations
Radiofrequency ablation (RFA) is more effective for hepatocellular carcinoma (HCC) smaller than 3 cm. Combining percutaneous ethanol injection with RFA can increase the ablation zone; however, the long-term outcome remains unknown. The aim of this study was to compare long-term outcomes after combination therapy between patients with HCC of 2–3 cm versus 3.1–4 cm and in high-risk versus non-high-risk locations. The primary endpoint was overall survival and the secondary endpoint was local tumor progression (LTP). Fifty-four consecutive patients with 72 tumors were enrolled. Twenty-two (30.6%) tumors were 3.1–4 cm and 60 (83.3%) were in high-risk locations. Primary technique effectiveness was comparable between HCC of 2–3 cm versus 3.1–4 cm (98% vs. 95.5%, p = 0.521), and between HCC in non-high-risk and high-risk locations (100% vs. 96.7%, p = 1.000). The cumulative survival rates at 1, 3, and 5 years were 90.3%, 78.9%, and 60.3%, respectively, in patients with HCC of 2–3 cm; 95.0%, 84.4%, and 69.3% in HCC of 3.1–4.0 cm (p = 0.397); 90.0%, 71.1%, and 71.1% in patients with HCC in non-high-risk locations; and 92.7%, 81.6%, and 65.4% in high-risk locations (p = 0.979). The cumulative LTP rates at 1, 3, and 5 years were 10.2%, 32.6%, and 32.6%, respectively, in all HCCs; 12.6%, 33.9%, and 33.9% in HCC of 2–3 cm; 4.8%, 29.5%, and 29.5% in HCC of 3.1–4 cm (p = 0.616); 16.7%, 50.0%, and 50.0% in patients with HCC in non-high-risk locations; and 8.8%, 29.9%, and 29.9% in patients with HCC in high-risk locations (p = 0.283). The cumulative survival and LTP rates were not significantly different among the subgroups. Combining RFA and percutaneous ethanol injection achieved comparable long-term outcomes in HCCs of 2–3 cm versus 3.1–4.0 cm and in high-risk versus non-high-risk locations. Randomized controlled or cohort studies with larger sample sizes are warranted.
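Cumulative survival and LTP rates of this kind are typically Kaplan-Meier estimates (the abstract does not name the estimator, so this is an assumption). A minimal sketch with hypothetical follow-up data, not the study's data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function.

    times  : follow-up time for each patient
    events : 1 if the event (death / progression) occurred, 0 if censored
    Returns the distinct event times and the survival estimate at each.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in event_times:
        n_at_risk = np.sum(times >= t)             # still under observation
        d = np.sum((times == t) & (events == 1))   # events at time t
        s *= 1.0 - d / n_at_risk                   # product-limit step
        surv.append(s)
    return event_times, np.array(surv)

# Hypothetical follow-up times in years (1 = event, 0 = censored).
t, s = kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0])
```

The "cumulative LTP rate" reported in such studies is then 1 − S(t) evaluated at 1, 3, and 5 years.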
Mergers and acquisitions matching for performance improvement: a DEA-based approach
This article proposes a new data envelopment analysis (DEA)-based
approach to deal with mergers and acquisitions (M&As) matching.
To derive reliable matching degrees between bidder and target
firms, we consider both technical efficiency and scale efficiency.
Specifically, an inverse DEA model is developed for measuring the
technical efficiency, while a conventional DEA model is employed to
identify the returns to scale of the merged decision-making units
(DMUs). Then, an optimization model is formulated to generate
matching results to improve DMUs' performance. An empirical study
of M&As matching among Turkish energy firms illustrates the
proposed approach. The study shows that both technical efficiency
and scale efficiency affect M&A matching practices.
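The technical-efficiency building block of such an approach is standard DEA. As an illustration only (this is the conventional input-oriented CCR model in multiplier form, not the authors' inverse DEA model, and the firm data are hypothetical):

```python
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """CCR efficiency of DMU o, single input X and single output Y.

    Decision variables: output weight u, input weight v.
      maximize  u * Y[o]
      s.t.      v * X[o] = 1
                u * Y[j] - v * X[j] <= 0   for every DMU j
                u, v >= 0
    """
    n = len(X)
    c = [-Y[o], 0.0]                        # linprog minimizes, so negate
    A_ub = [[Y[j], -X[j]] for j in range(n)]
    b_ub = [0.0] * n
    A_eq = [[0.0, X[o]]]                    # normalize weighted input to 1
    b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)], method="highs")
    return -res.fun

# Hypothetical firms: one input (cost), one output (energy delivered).
X = [2.0, 4.0, 3.0]
Y = [4.0, 6.0, 3.0]
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

A DMU with efficiency 1 lies on the frontier; merged units can then be scored against the same frontier to judge whether a pairing improves performance.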
Evolution of pairing from weak to strong coupling on a honeycomb lattice
We study the evolution of the pairing from weak to strong coupling on a
honeycomb lattice using quantum Monte Carlo. We present numerical
evidence of the BCS-BEC crossover as the coupling strength
increases on a honeycomb lattice with a small Fermi surface by
measuring a wide range of observables: double occupancy, spin
susceptibility, local pair correlation, and kinetic energy.
Although the model hosts Dirac fermions at low energy, we find no
significant qualitative difference in the BCS-BEC crossover
compared to systems with an extended Fermi surface, except in the
weak-coupling (BCS) regime.
Comment: 5 page
Segatron: Segment-Aware Transformer for Language Modeling and Understanding
Transformers are powerful for sequence modeling. Nearly all state-of-the-art
language models and pre-trained language models are based on the Transformer
architecture. However, the Transformer distinguishes sequential
tokens only by their position index. We hypothesize that better
contextual representations can be
generated from the Transformer with richer positional information. To verify
this, we propose a segment-aware Transformer (Segatron), by replacing the
original token position encoding with a combined position encoding of
paragraph, sentence, and token. We first introduce the segment-aware mechanism
to Transformer-XL, which is a popular Transformer-based language model with
memory extension and relative position encoding. We find that our method can
further improve the Transformer-XL base model and large model, achieving 17.1
perplexity on the WikiText-103 dataset. We further investigate the pre-training
masked language modeling task with Segatron. Experimental results show that
BERT pre-trained with Segatron (SegaBERT) can outperform BERT with vanilla
Transformer on various NLP tasks, and outperforms RoBERTa on
zero-shot sentence representation learning.
Comment: Accepted by AAAI 202
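The combined paragraph/sentence/token position encoding can be sketched as the sum of three embedding tables, one per granularity (a simplified reading of the idea; the table sizes, embedding width, and random initialization below are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # hypothetical embedding width

# Three position tables: paragraph index, sentence index, token-in-sentence.
E_para = rng.normal(size=(16, d_model))
E_sent = rng.normal(size=(32, d_model))
E_tok = rng.normal(size=(128, d_model))

def segment_position_encoding(para_idx, sent_idx, tok_idx):
    """Sum the three position embeddings for each token (Segatron-style),
    replacing the single token-position embedding of a vanilla Transformer."""
    return E_para[para_idx] + E_sent[sent_idx] + E_tok[tok_idx]

# Five tokens spanning two sentences of the same paragraph.
para = np.array([0, 0, 0, 0, 0])
sent = np.array([0, 0, 0, 1, 1])
tok = np.array([0, 1, 2, 0, 1])
pos = segment_position_encoding(para, sent, tok)  # shape (5, d_model)
```

In a full model these encodings would be added to the token embeddings before the first attention layer, and the tables would be learned rather than fixed.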