Sherpa: a Mission-Independent Data Analysis Application
The ever-increasing quality and complexity of astronomical data underscores
the need for new and powerful data analysis applications. This need has led to
the development of Sherpa, a modeling and fitting program in the CIAO software
package that enables the analysis of multi-dimensional, multi-wavelength data.
In this paper, we present an overview of Sherpa's features, which include:
support for a wide variety of input and output data formats, including the new
Model Descriptor List (MDL) format; a model language which permits the
construction of arbitrarily complex model expressions, including ones
representing instrument characteristics; a wide variety of fit statistics and
methods of optimization, model comparison, and parameter estimation;
multi-dimensional visualization, provided by ChIPS; and new interactive
analysis capabilities provided by embedding the S-Lang interpreted scripting
language. We conclude by showing example Sherpa analysis sessions.
Comment: To appear in Proc. SPIE Conf. 4477. 12 pages, 4 figures.
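Sherpa's model language lets users combine simple components into arbitrarily complex model expressions through operators. A minimal pure-Python sketch of that idea is below; the class names (`Gauss1D`, `Const1D`) echo Sherpa's component names but the code is illustrative, not Sherpa's actual API.

```python
# Sketch of an operator-composable model language in the spirit of
# Sherpa's (illustrative only; not the CIAO Sherpa implementation).
import math

class Model:
    def __add__(self, other):
        return SumModel(self, other)

class SumModel(Model):
    """Composite model: evaluates to the sum of its two parts."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __call__(self, x):
        return self.a(x) + self.b(x)

class Const1D(Model):
    """Flat background level."""
    def __init__(self, c):
        self.c = c
    def __call__(self, x):
        return self.c

class Gauss1D(Model):
    """Gaussian line parameterised by amplitude, position, FWHM."""
    def __init__(self, ampl, pos, fwhm):
        self.ampl, self.pos, self.fwhm = ampl, pos, fwhm
    def __call__(self, x):
        sigma = self.fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
        return self.ampl * math.exp(-0.5 * ((x - self.pos) / sigma) ** 2)

# Complex expressions compose naturally through operators:
model = Gauss1D(ampl=10.0, pos=5.0, fwhm=2.0) + Const1D(1.5)
print(model(5.0))  # peak amplitude plus background: 11.5
```

In Sherpa itself, such composite expressions can additionally be folded through instrument responses before fitting; the operator-overloading pattern above is what makes that composability cheap to express.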
Mismatching-Aware Unsupervised Translation Quality Estimation For Low-Resource Languages
Translation Quality Estimation (QE) is the task of predicting the quality of
machine translation (MT) output without any reference. This task has gained
increasing attention as an important component in the practical applications of
MT. In this paper, we first propose XLMRScore, which is a cross-lingual
counterpart of BERTScore computed via the XLM-RoBERTa (XLMR) model. This metric
can serve as a simple unsupervised QE method, but employing it raises two
issues: first, untranslated tokens lead to unexpectedly high translation
scores, and second, mismatching errors arise between source and hypothesis
tokens when applying greedy matching in XLMRScore. To mitigate these issues,
we suggest, respectively, replacing untranslated words with the unknown token
and cross-lingually aligning the pre-trained model so that aligned words are
represented closer to each other. We evaluate the proposed
method on four low-resource language pairs of WMT21 QE shared task, as well as
a new English-Farsi test dataset introduced in this paper. Experiments show
that our method achieves results comparable to the supervised baseline in two
zero-shot scenarios, i.e., with less than a 0.01 difference in Pearson
correlation, while outperforming unsupervised rivals by more than 8% on
average across all the low-resource language pairs.
Comment: Submitted to Language Resources and Evaluation
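The greedy matching underlying BERTScore-style metrics such as XLMRScore pairs each token with its most similar counterpart by embedding cosine similarity. The sketch below uses toy 2-D vectors as stand-ins for XLM-RoBERTa token embeddings; it also shows why a token copied untranslated into the hypothesis (identical embedding, similarity 1.0) inflates the score, which motivates the paper's replace-with-unknown-token fix.

```python
# Sketch of BERTScore-style greedy matching (toy 2-D vectors stand in
# for contextual XLM-RoBERTa embeddings; purely illustrative).
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def greedy_f1(src_emb, hyp_emb):
    # Greedy matching: each token pairs with its most similar token
    # on the other side; recall over source, precision over hypothesis.
    recall = sum(max(cos(s, h) for h in hyp_emb) for s in src_emb) / len(src_emb)
    precision = sum(max(cos(h, s) for s in src_emb) for h in hyp_emb) / len(hyp_emb)
    return 2 * precision * recall / (precision + recall)

src = [(1.0, 0.0), (0.0, 1.0)]
hyp = [(0.9, 0.1), (0.1, 0.9)]   # faithful translation: high but < 1.0
copied = src                      # untranslated copy: perfect 1.0 score
print(round(greedy_f1(src, hyp), 3), greedy_f1(src, copied))
```

An untranslated hypothesis that simply copies the source scores a perfect 1.0 here, despite being a translation failure; masking such tokens with the unknown token breaks that spurious exact match.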
DNN adaptation by automatic quality estimation of ASR hypotheses
In this paper we propose to exploit the automatic Quality Estimation (QE) of
ASR hypotheses to perform the unsupervised adaptation of a deep neural network
modeling acoustic probabilities. Our hypothesis is that significant
improvements can be achieved by: i) automatically transcribing the evaluation
data we are currently trying to recognise, and ii) selecting from it a subset
of "good quality" instances based on the word error rate (WER) scores predicted
by a QE component. To validate this hypothesis, we run several experiments on
the evaluation data sets released for the CHiME-3 challenge. First, we operate
in oracle conditions in which manual transcriptions of the evaluation data are
available, thus allowing us to compute the "true" sentence WER. In this
scenario, we perform the adaptation with variable amounts of data, which are
characterised by different levels of quality. Then, we move to realistic
conditions in which the manual transcriptions of the evaluation data are not
available. In this case, the adaptation is performed on data selected according
to the WER scores "predicted" by a QE component. Our results indicate that: i)
QE predictions allow us to closely approximate the adaptation results obtained
in oracle conditions, and ii) the overall ASR performance based on the proposed
QE-driven adaptation method is significantly better than the strong, most
recent, CHiME-3 baseline.
Comment: Computer Speech & Language, December 201
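The core selection step described above, keeping only hypotheses whose QE-predicted sentence WER is low enough to be useful for adaptation, can be sketched in a few lines. The data layout and the 0.2 threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of QE-driven data selection for unsupervised DNN adaptation:
# retain only "good quality" ASR hypotheses, judged by the sentence-level
# WER a QE component predicts (threshold of 0.2 is an assumed example).
def select_adaptation_data(hypotheses, wer_threshold=0.2):
    """hypotheses: list of (transcript, predicted_wer) pairs."""
    return [text for text, pred_wer in hypotheses if pred_wer <= wer_threshold]

batch = [
    ("turn on the lights", 0.05),    # confident hypothesis -> keep
    ("play some jazz music", 0.15),  # acceptable -> keep
    ("norn on the lides", 0.55),     # likely mis-recognised -> drop
]
print(select_adaptation_data(batch))
# ['turn on the lights', 'play some jazz music']
```

In oracle conditions the true WER from manual transcriptions would replace `pred_wer`; the paper's finding is that predicted scores select nearly as well.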
Fog Computing in Medical Internet-of-Things: Architecture, Implementation, and Applications
In an era when the market segment of the Internet of Things (IoT) tops the
charts in various business reports, the field of medicine is envisioned to
gain large benefits from the explosion of wearables and internet-connected
sensors that surround us, acquiring and communicating unprecedented data on
symptoms, medication, food intake, and daily-life activities impacting one's
health and wellness. However, IoT-driven healthcare has to overcome many
barriers, such as: 1) there is an increasing demand for data storage on cloud
servers, where the analysis of medical big data becomes increasingly complex;
2) the data, when communicated, are vulnerable to security and privacy issues;
3) the communication of the continuously collected data is not only costly but
also energy-hungry; 4) operating and maintaining the sensors directly from the
cloud servers are non-trivial tasks. This book chapter defines Fog
Computing in the context of medical IoT. Conceptually, Fog
Computing is a service-oriented intermediate layer in IoT, providing the
interfaces between the sensors and cloud servers for facilitating connectivity,
data transfer, and queryable local database. The centerpiece of Fog computing
is a low-power, intelligent, wireless, embedded computing node that carries out
signal conditioning and data analytics on raw data collected from wearables or
other medical sensors and offers efficient means to serve telehealth
interventions. We implemented and tested a Fog Computing system using the
Intel Edison and Raspberry Pi that allows acquisition, computing, storage and
communication of the various medical data such as pathological speech data of
individuals with speech disorders, Phonocardiogram (PCG) signal for heart rate
estimation, and Electrocardiogram (ECG)-based Q, R, S detection.
Comment: 29 pages, 30 figures, 5 tables. Keywords: Big Data, Body Area
Network, Body Sensor Network, Edge Computing, Fog Computing, Medical
Cyberphysical Systems, Medical Internet-of-Things, Telecare, Tele-treatment,
Wearable Devices. Chapter in Handbook of Large-Scale Distributed Computing in
Smart Healthcare (2017), Springer
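The kind of on-node analytics a Fog node performs, such as the ECG-based heart-rate estimation mentioned above, can be illustrated with a toy peak picker. This is a deliberately simplified stand-in (threshold plus local maximum) for a real QRS detector such as Pan-Tompkins; the sampling rate and synthetic signal are assumptions for the example.

```python
# Toy sketch of on-node heart-rate estimation from an ECG-like signal,
# the sort of lightweight analytics a Fog node (e.g., an Intel Edison)
# could run locally instead of shipping raw samples to the cloud.
def detect_peaks(signal, threshold):
    """Naive R-peak picker: samples above threshold that are local maxima."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]:
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean R-R interval (seconds) converted to beats per minute."""
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

fs = 100                      # assumed sampling rate in Hz
ecg = [0.0] * 300             # 3 s of flat baseline...
for p in (50, 150, 250):      # ...with synthetic R peaks once per second
    ecg[p] = 1.0

peaks = detect_peaks(ecg, threshold=0.5)
print(heart_rate_bpm(peaks, fs))  # one peak per second -> 60.0 bpm
```

Sending only the derived beats-per-minute value, rather than the continuous raw ECG stream, is exactly the bandwidth and energy saving the chapter attributes to the Fog layer.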
Block-Online Multi-Channel Speech Enhancement Using DNN-Supported Relative Transfer Function Estimates
This work addresses the problem of block-online processing for multi-channel
speech enhancement. Such processing is vital in scenarios with moving speakers
and/or when very short utterances are processed, e.g., in voice assistant
scenarios. We consider several variants of a system that performs beamforming
supported by DNN-based voice activity detection (VAD) followed by
post-filtering. The speaker is targeted through estimating relative transfer
functions between microphones. Each block of the input signals is processed
independently in order to make the method applicable in highly dynamic
environments. Owing to the short length of the processed block, the statistics
required by the beamformer are estimated less precisely. The influence of this
inaccuracy is studied and compared to the processing regime when recordings are
treated as one block (batch processing). The experimental evaluation of the
proposed method is performed on the large CHiME-4 datasets and on another
dataset featuring a moving target speaker. The experiments are evaluated in terms
of objective and perceptual criteria (such as signal-to-interference ratio
(SIR) or perceptual evaluation of speech quality (PESQ), respectively).
Moreover, word error rate (WER) achieved by a baseline automatic speech
recognition system is evaluated, for which the enhancement method serves as a
front-end solution. The results indicate that the proposed method is robust
with respect to short length of the processed block. Significant improvements
in terms of the criteria and WER are observed even for the block length of 250
ms.
Comment: 10 pages, 8 figures, 4 tables. Modified version of the article
accepted for publication in the IET Signal Processing journal. Original
results unchanged, additional experiments presented, refined discussion and
conclusion.
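The block-online regime described above, cutting the input into short independent blocks (e.g., 250 ms) so statistics never depend on stale data from a moving speaker, can be sketched as follows. `process_block` is a trivial placeholder standing in for the per-block VAD, beamforming, and post-filtering pipeline.

```python
# Sketch of block-online segmentation: each short block is processed
# independently, so estimates adapt to moving speakers at the cost of
# less precise per-block statistics (as the paper studies).
def blocks(signal, fs, block_ms=250):
    """Yield consecutive non-overlapping blocks of block_ms milliseconds."""
    n = int(fs * block_ms / 1000)       # samples per block
    for start in range(0, len(signal), n):
        yield signal[start:start + n]

def process_block(block):
    # Placeholder: a real system would estimate per-block statistics,
    # beamform toward the target, and post-filter here.
    return block

fs = 16000
signal = [0.0] * fs                     # one second of (silent) audio
out = [process_block(b) for b in blocks(signal, fs)]
print(len(out), len(out[0]))            # 4 blocks of 4000 samples each
```

Batch processing corresponds to the degenerate case of a single block covering the whole recording; the paper's result is that quality degrades surprisingly little as the block shrinks to 250 ms.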