Robust globally divergence-free weak Galerkin finite element methods for natural convection problems
This paper proposes and analyzes a class of weak Galerkin (WG) finite element
methods for stationary natural convection problems in two and three dimensions.
We use piecewise polynomials of degrees k, k-1, and k (k >= 1) for the
velocity, pressure, and temperature approximations in the interior of
elements, respectively, and piecewise polynomials of degrees l, k, and l
(l = k-1 or k) for the numerical traces of velocity, pressure, and
temperature on the interfaces of elements.
elements. The methods yield globally divergence-free velocity solutions.
Well-posedness of the discrete scheme is established, optimal a priori error
estimates are derived, and an unconditionally convergent iteration algorithm is
presented. Numerical experiments confirm the theoretical results and show the
robustness of the methods with respect to the Rayleigh number.
Comment: 32 pages, 13 figures
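For context, the stationary natural convection problem referred to above is typically modeled by a Boussinesq-type system coupling fluid flow and temperature; one standard nondimensional form (the exact scaling and coefficients used in the paper may differ) is:

```latex
\begin{aligned}
-\nu \Delta \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u} + \nabla p
  &= \mathrm{Ra}\,\theta\,\boldsymbol{e}_d , \\
\nabla\cdot \boldsymbol{u} &= 0 , \\
-\kappa \Delta \theta + \boldsymbol{u}\cdot\nabla\theta &= g ,
\end{aligned}
```

where \(\boldsymbol{u}\) is the velocity, \(p\) the pressure, \(\theta\) the temperature, \(\mathrm{Ra}\) the Rayleigh number, and \(\boldsymbol{e}_d\) the unit vector opposing gravity. The "globally divergence-free" property means the discrete velocity satisfies the second equation exactly, not merely weakly.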
animation : An R Package for Creating Animations and Demonstrating Statistical Methods
Animated graphs that demonstrate statistical ideas and methods can both attract interest and assist understanding. In this paper we first discuss how animations can be related to some statistical topics such as iterative algorithms, random simulations, (re)sampling methods, and dynamic trends; we then describe the approaches that may be used to create animations and give an overview of the R package animation, including its design, usage, and the statistical topics covered in the package. With the animation package, we can export the animations produced by R into a variety of formats, such as a web page, a GIF animation, a Flash movie, a PDF document, or an MP4/AVI video, so that users can publish the animations fairly easily. The design of this package is flexible enough to be readily incorporated into web applications; e.g., we can generate animations online with Rweb, which means we do not even need R to be installed locally to create animations. We show examples of the use of animations in teaching statistics and in the presentation of statistical reports using Sweave or knitr. In fact, this paper itself was written with the knitr and animation packages, and the animations are embedded in the PDF document, so that readers can watch them in real time as they read the paper (Adobe Reader is required). Animations can add insight and interest to traditional static approaches to teaching statistics and reporting, making statistics a more interesting and appealing subject.
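The package itself is in R, but the core idea of animating an iterative algorithm — recording one frame per iteration and stitching the frames into a GIF or video — can be sketched in Python; the function name here is hypothetical and only illustrates the frame-per-iteration pattern:

```python
def newton_frames(f, df, x0, steps=8):
    """Record successive Newton-method iterates; each entry is the state
    that would be drawn as one animation frame (current root estimate)."""
    frames = [x0]
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)   # one Newton update per frame
        frames.append(x)
    return frames

# Frames for finding sqrt(2) as the root of x^2 - 2.
frames = newton_frames(lambda x: x * x - 2, lambda x: 2 * x, x0=2.0)
```

Each recorded state would be rendered (curve plus tangent line) and the resulting images assembled into an animation, analogous to what the animation package's export functions such as saveGIF() do around a plotting loop in R.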
Dynamic Graphics and Reporting for Statistics
Statistics as a scientific discipline has a dynamic nature, which can be
observed in many statistical algorithms and theories as well as in data
analysis. For example, asymptotic theories in statistics are inherently
dynamic: they describe how a statistic or an estimator behaves as the sample
size increases. Data analysis is almost never a static process. Instead, it
is an iterative process involving cleaning, describing, modeling, and
re-cleaning the data. Reports may end up being re-written due to changes in
the data and analysis.
This thesis consists of three parts, addressing the dynamic aspects of
statistics and data analysis. In the first part, we show how to explain the
ideas behind some statistical methods using animations, followed by an
introduction to the design and functionality of the animation package. In
the second part, we discuss the design of an interactive statistical
graphics system, with an emphasis on the reactive programming paradigm and
its connection with the data infrastructure in R, as utilized in the cranvas
package. In the third part, we provide a solution to statistical reporting,
which is implemented in the knitr package, making use of literate
programming. It frees us from the traditional approach of cut-and-paste, and
provides a seamless integration of computing and reporting that enhances
reproducible research. Demos and examples are provided along with the
discussion.
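The reactive programming paradigm mentioned in the second part can be sketched with a minimal observer pattern: shared data notifies every registered view when it changes, which is what linked interactive graphics require. This is an illustrative Python sketch of the paradigm only, not cranvas's implementation:

```python
class Reactive:
    """Minimal reactive value: registered observers re-run on change."""

    def __init__(self, value):
        self._value = value
        self._observers = []

    def observe(self, fn):
        self._observers.append(fn)
        fn(self._value)             # run once with the current value

    def set(self, value):
        self._value = value
        for fn in self._observers:  # propagate the change to every view
            fn(value)

# Two "plots" stay in sync with shared data, as linked brushing requires.
data = Reactive([1, 2, 3])
log = []
data.observe(lambda v: log.append(("scatter", sum(v))))
data.observe(lambda v: log.append(("hist", len(v))))
data.set([4, 5])                    # both views update automatically
```

The key design point is inversion of control: plots declare their dependence on the data once, and updates flow automatically rather than being pushed by hand.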
Uformer: A Unet based dilated complex & real dual-path conformer network for simultaneous speech enhancement and dereverberation
The complex spectrum and the magnitude spectrum are considered two major
features in speech enhancement and dereverberation. Traditional approaches
typically treat these two features separately, ignoring their underlying
relationship. In this paper, we
propose Uformer, a Unet based dilated complex & real dual-path conformer
network in both complex and magnitude domain for simultaneous speech
enhancement and dereverberation. We exploit time attention (TA) and dilated
convolution (DC) to leverage local and global contextual information and
frequency attention (FA) to model dimensional information. These three
sub-modules contained in the proposed dilated complex & real dual-path
conformer module effectively improve the speech enhancement and dereverberation
performance. Furthermore, hybrid encoder and decoder are adopted to
simultaneously model the complex spectrum and magnitude and promote the
information interaction between two domains. Encoder decoder attention is also
applied to enhance the interaction between encoder and decoder. Our
model outperforms all SOTA time-domain and complex-domain models both
objectively and subjectively. Specifically, Uformer reaches 3.6032 DNSMOS on
the blind test set of the Interspeech 2021 DNS Challenge, outperforming all
top-performing models. We also carry out ablation experiments to quantify the
contribution of each proposed sub-module.
Comment: Accepted by ICASSP 202
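The time-attention (TA) and frequency-attention (FA) modules above are both built on the standard scaled dot-product attention mechanism, applied along different axes of the spectrogram. A pure-Python sketch of that underlying mechanism (not the authors' implementation, which operates on batched tensors):

```python
import math

def attention(q, k, v):
    """Scaled dot-product attention on plain lists of vectors.
    Applied along the time axis this corresponds to time attention (TA);
    along the frequency axis, to frequency attention (FA)."""
    d = len(q[0])
    out = []
    for qi in q:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d)
                  for kj in k]
        m = max(scores)                      # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # output is the attention-weighted sum of the values
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out
```

Because the softmax weights sum to one, each output is a convex combination of the value vectors, letting the network pool context from any position along the chosen axis.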
VE-KWS: Visual Modality Enhanced End-to-End Keyword Spotting
The performance of keyword spotting (KWS) systems based on the audio modality,
commonly measured in false alarms and false rejects, degrades significantly
under far-field and noisy conditions. Therefore, audio-visual keyword
spotting, which leverages complementary relationships over multiple modalities,
has recently gained much attention. However, current studies mainly focus on
combining the exclusively learned representations of different modalities,
instead of exploring the modal relationships during each respective modeling.
In this paper, we propose a novel visual modality enhanced end-to-end KWS
framework (VE-KWS), which fuses audio and visual modalities from two aspects.
The first is to utilize the speaker location information obtained from the
lip region in videos to assist the training of a multi-channel audio beamformer.
By involving the beamformer as an audio enhancement module, the acoustic
distortions, caused by the far field or noisy environments, could be
significantly suppressed. The second is to conduct cross-attention between
different modalities to capture the inter-modal relationships and help the
representation learning of each modality. Experiments on the MISP challenge
corpus show that our proposed model achieves a 2.79% false rejection rate and
a 2.95% false alarm rate on the Eval set, a new SOTA performance compared
with the top-ranking systems in the ICASSP 2022 MISP challenge.
Comment: 5 pages. Accepted at ICASSP202
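The false rejection and false alarm rates reported above are standard detection metrics; a minimal Python sketch of how they are computed (hypothetical variable names, not the challenge's official scorer):

```python
def frr_far(labels, preds):
    """False rejection rate and false alarm rate for keyword spotting.
    labels/preds: 1 = keyword present/detected, 0 = absent/not detected.
    FRR = missed keywords / true keywords;
    FAR = false triggers / non-keyword segments."""
    pos = sum(1 for y in labels if y == 1)
    neg = len(labels) - pos
    miss = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    fa = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    return miss / pos, fa / neg

# One miss out of three keywords, one false alarm out of two non-keywords.
frr, far = frr_far([1, 1, 1, 0, 0], [1, 0, 1, 1, 0])
```

The two rates trade off against each other via the detection threshold, which is why KWS systems report both rather than a single accuracy figure.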