Hybrid beamforming for single carrier mmWave MIMO systems
Hybrid analog and digital beamforming (HBF) has been recognized as an
attractive technique offering a tradeoff between hardware implementation
limitations and system performance for future broadband millimeter wave (mmWave)
communications. In contrast to most current works focusing on the HBF design
for orthogonal frequency division multiplexing based mmWave systems, this paper
investigates the HBF design for single carrier (SC) systems, which benefit from
a low peak-to-average power ratio in transmission. By applying the
alternating minimization method, we propose an efficient HBF scheme based on
the minimum mean square error criterion. Simulation results show that the
proposed scheme outperforms the conventional HBF scheme for SC systems.
Comment: IEEE GlobalSIP2018, Feb. 201
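The alternating-minimization idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's algorithm: it assumes a fully digital target precoder `F_opt` is already known, and alternates between an unconstrained least-squares update of the digital precoder and a heuristic phase-only projection for the analog precoder (all names here are hypothetical).

```python
import numpy as np

def hybrid_beamforming_altmin(F_opt, n_rf, n_iter=50, seed=0):
    """Approximate a target precoder F_opt (n_tx x n_s) as F_rf @ F_bb,
    where F_rf (n_tx x n_rf) has unit-modulus (phase-shifter-only) entries
    and F_bb (n_rf x n_s) is an unconstrained digital precoder.
    Alternating least squares with a phase-projection heuristic."""
    rng = np.random.default_rng(seed)
    n_tx, n_s = F_opt.shape
    # random phase-only initialization of the analog precoder
    F_rf = np.exp(1j * rng.uniform(0.0, 2 * np.pi, (n_tx, n_rf)))
    for _ in range(n_iter):
        # digital stage: unconstrained least squares given the analog stage
        F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
        # analog stage: project onto unit-modulus entries (keep phases only)
        F_rf = np.exp(1j * np.angle(F_opt @ F_bb.conj().T))
    # final digital update so the returned pair is least-squares optimal
    F_bb, *_ = np.linalg.lstsq(F_rf, F_opt, rcond=None)
    return F_rf, F_bb

# toy usage: approximate a semi-unitary 16-antenna, 4-stream precoder
rng = np.random.default_rng(1)
A = rng.standard_normal((16, 4)) + 1j * rng.standard_normal((16, 4))
F_opt = np.linalg.qr(A)[0]                       # orthonormal columns
F_rf, F_bb = hybrid_beamforming_altmin(F_opt, n_rf=8)
err = np.linalg.norm(F_opt - F_rf @ F_bb) / np.linalg.norm(F_opt)
print(f"relative approximation error: {err:.3f}")
```

With `n_rf` at least twice the number of streams, the hybrid factorization can typically approach the fully digital precoder closely; the final least-squares step guarantees the residual never exceeds the norm of the target.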
A Study on the Ecological, Hydrological, and Riverbed-Morphological Effects of Sediment Replenishment for Effective Restoration of River Habitats
Kyoto University doctoral dissertation (by coursework), Doctor of Philosophy (Engineering), Kō No. 24593, Eng. Doc. No. 5099, New System||Eng||1976 (University Library). Department of Urban Management, Graduate School of Engineering, Kyoto University. Examiners: Professor Tetsuya Sumi (chief), Associate Professor Yasuhiro Takemon, Associate Professor Sameh Kantoush. Qualified under Article 4, Paragraph 1 of the Degree Regulations. Doctor of Philosophy (Engineering), Kyoto University
An LSTM-Based Predictive Monitoring Method for Data with Time-varying Variability
The recurrent neural network and its variants have shown great success in
processing sequences in recent years. However, these networks have received
little attention for anomaly detection in predictive process monitoring.
Moreover, traditional statistical models rely on distributional assumptions and
hypothesis tests, whereas neural network (NN) models require far fewer
assumptions. This flexibility enables NN models to work efficiently on data
with time-varying variability, a common inherent aspect of data in practice.
This paper explores the ability of the recurrent neural network structure to
monitor processes and proposes a control chart based on long short-term memory
(LSTM) prediction intervals for data with time-varying variability. The
simulation studies provide empirical evidence that the proposed model
outperforms other NN-based predictive monitoring methods for mean shift
detection. The proposed method is also applied to time series sensor data,
which confirms that the proposed method is an effective technique for detecting
abnormalities.
Comment: 19 pages, 9 figures, 6 table
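The charting idea described above can be sketched without any deep-learning machinery. In the following minimal illustration (all names hypothetical, not the paper's method), a trivial previous-value predictor stands in for the LSTM; the point is the control-chart logic, where the prediction-interval width is re-estimated from recent residuals and therefore adapts to time-varying variability.

```python
import numpy as np

def predictive_control_chart(series, window=50, z=3.0):
    """Flag an observation as anomalous when it falls outside a
    one-step-ahead prediction interval. The interval half-width
    z * sigma is estimated from the last `window` residuals, so it
    tracks time-varying variability. A previous-value predictor
    stands in for a trained LSTM here."""
    series = np.asarray(series, dtype=float)
    alarms, residuals = [], []
    for t in range(1, len(series)):
        pred = series[t - 1]                      # stand-in one-step predictor
        resid = series[t] - pred
        if len(residuals) >= window:
            sigma = np.std(residuals[-window:])   # local, time-varying scale
            if abs(resid) > z * sigma:
                alarms.append(t)
                # clip the alarm's residual so it doesn't inflate the interval
                residuals.append(np.sign(resid) * z * sigma)
                continue
        residuals.append(resid)
    return alarms

# toy usage: noise whose scale drifts upward, plus one injected mean shift
rng = np.random.default_rng(0)
scale = np.linspace(0.5, 1.5, 400)                # time-varying variability
x = rng.normal(0.0, scale)
x[300] += 12.0                                    # injected shift at t = 300
print(predictive_control_chart(x))
```

A fixed-width chart calibrated on the early, low-variance portion of this series would raise frequent false alarms later on; the rolling residual window avoids that while still catching the injected shift.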
Alphabet of one-loop Feynman integrals
In this paper, we present the universal structure of the alphabet of one-loop
Feynman integrals. The letters in the alphabet are calculated using the Baikov
representation with cuts. We consider both convergent and divergent cut
integrals and observe that letters in the divergent cases can be easily
obtained from convergent cases by applying certain limits. The letters are
written as simple expressions in terms of various Gram determinants. The
knowledge of the alphabet enables us to easily construct the canonical
differential equations of the $d\log$ form and aids in bootstrapping the
symbols of the solutions.
Comment: 13 pages, 2 figures; v3: published version in Chinese physics
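For reference, the "canonical differential equations" and the Gram determinants mentioned in this abstract follow standard notation in the field (this is the generic textbook form, not this paper's specific conventions):

```latex
% Canonical (dlog-form) differential equations for a basis of master
% integrals \vec f in kinematic variables \vec x, with constant
% rational matrices A_i and symbol letters (alphabet) W_i:
d\,\vec f(\vec x;\epsilon)
  \;=\; \epsilon\,\big(d\tilde A(\vec x)\big)\,\vec f(\vec x;\epsilon),
\qquad
\tilde A(\vec x) \;=\; \sum_i A_i \,\log W_i(\vec x).

% Gram determinant of external momenta q_1, ..., q_n, the building
% block for one-loop letters:
G(q_1,\dots,q_n) \;=\; \det\big(q_i \cdot q_j\big)_{1 \le i,j \le n}.
```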
Intersection theory rules symbology
We propose a novel method to determine the structure of symbols for a family
of polylogarithmic Feynman integrals. Using the $d\log$ bases and simple formulas
for the first- and second-order contributions to the intersection numbers, we
give a streamlined procedure to compute the entries in the coefficient matrices
of canonical differential equations, including the symbol letters and the
rational coefficients. We also provide a selection rule to decide whether a
given matrix element must be zero. The symbol letters are closely related to
the poles of the integrands, and also have interesting connections to the
geometry of Newton polytopes. Our method will have important applications in
cutting-edge multi-loop calculations. The simplicity of our results also hints
at possible underlying structure in perturbative quantum field theories.
Comment: 7 pages, 1 figur
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions
In the realm of large multi-modal models (LMMs), efficient modality alignment
is crucial yet often constrained by the scarcity of high-quality image-text
data. To address this bottleneck, we introduce the ShareGPT4V dataset, a
pioneering large-scale resource featuring 1.2 million highly descriptive
captions, which surpasses existing datasets in diversity and information
content, covering world knowledge, object properties, spatial relationships,
and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated
set of 100K high-quality captions collected from advanced GPT4-Vision and has been
expanded to 1.2M with a superb caption model trained on this subset. ShareGPT4V
first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT)
phase, by substituting an equivalent quantity of detailed captions in existing
SFT datasets with a subset of our high-quality captions, significantly
enhancing the LMMs like LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME
and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and
2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training
and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple
architecture that has remarkable performance across a majority of the
multi-modal benchmarks. This project is available at
https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the
LMMs community.
Comment: Project: https://ShareGPT4V.github.i