Front-end Multiplexing applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments
As the digital camera market has shown with sensor resolutions growing into the
megapixel range, scientific and high-tech imagers across the entire spectrum,
from radio to X-rays, also tend toward ever larger pixel counts. The constraints
on front-end signal transmission grow accordingly. An almost unavoidable
solution for simplifying the integration of large pixel arrays is front-end
multiplexing. Moreover, simple and efficient techniques allow read-out
multiplexers to be integrated in the focal plane itself. CCD (Charge Coupled
Device) technology, for instance, boosted the pixel count of digital cameras
precisely because it is a planar technology that integrates both the sensors
and a front-end multiplexed readout. In this context, front-end multiplexing
techniques are discussed to give a better understanding of their advantages and
their limits. Finally, the cases of astronomical instruments in the millimetre
and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices)
are described.
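To make the multiplexing idea concrete, here is a minimal Python sketch of time-division multiplexing, one of the standard SQUID readout schemes: N sensor channels share a single output line, so each channel is sampled at the line rate divided by N. The channel count, line rate, and test signals are assumed illustration values, not parameters of the X-IFU or QUBIC instruments.

```python
import numpy as np

# Toy illustration (not the instruments' implementation) of time-division
# multiplexing: N channels share one readout line; each frame, the
# multiplexer visits every channel once.

n_channels = 8          # pixels sharing one readout line (assumed value)
line_rate = 80_000      # samples/s on the shared line (assumed value)
frame_rate = line_rate // n_channels  # effective sampling rate per pixel
duration = 0.01         # seconds of simulated data

# Simulated slowly varying signals, one per channel.
t = np.arange(int(frame_rate * duration)) / frame_rate
signals = np.array([np.sin(2 * np.pi * (100 + 50 * k) * t)
                    for k in range(n_channels)])

# Multiplex: interleave channel samples frame by frame onto one stream.
stream = signals.T.reshape(-1)

# Demultiplex on the receiving side by de-interleaving the stream.
recovered = stream.reshape(-1, n_channels).T
assert np.allclose(recovered, signals)

print(f"{n_channels} channels at {frame_rate} Hz each on one {line_rate} Hz line")
```

The trade-off the abstract alludes to is visible here: adding channels to the shared line divides the per-pixel sampling rate, which is one of the limits of front-end multiplexing.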
Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments
Eliminating the negative effect of non-stationary environmental noise is a
long-standing research topic for automatic speech recognition that still
remains an important challenge. Data-driven supervised approaches, including
ones based on deep neural networks, have recently emerged as potential
alternatives to traditional unsupervised approaches and, with sufficient
training, can alleviate the shortcomings of the unsupervised methods in various
real-life acoustic environments. In this light, we review recently developed,
representative deep learning approaches for tackling non-stationary additive
and convolutional degradation of speech, with the aim of providing guidelines
for those involved in the development of environmentally robust speech
recognition systems. We separately discuss single- and multi-channel techniques
developed for the front-end and back-end of speech recognition systems, as well
as joint front-end and back-end training frameworks.
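As a concrete illustration of one front-end enhancement technique in the family the survey covers, the Python sketch below applies a Wiener-style time-frequency mask to a noisy signal. In the reviewed approaches a deep network would predict the mask; here a crude noise-floor estimate from leading noise-only frames stands in for the network output so the masking step itself is runnable. All signals and parameters are synthetic assumptions.

```python
import numpy as np

# Minimal, numpy-only sketch of mask-based speech enhancement for
# additive noise. A DNN would normally estimate the mask; a simple
# noise-floor estimate is used here instead.

def stft(x, n_fft=512, hop=128):
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

rng = np.random.default_rng(0)
fs = 16_000
t = np.arange(fs) / fs
clean = np.zeros(fs)
clean[fs // 4:] = np.sin(2 * np.pi * 440 * t[fs // 4:])  # tone stands in for speech
noisy = clean + 0.5 * rng.standard_normal(fs)             # additive degradation

S = stft(noisy)
noise_psd = np.mean(np.abs(S[:10]) ** 2, axis=0)   # leading frames are noise-only here
snr = np.maximum(np.abs(S) ** 2 / noise_psd - 1.0, 1e-3)
mask = snr / (snr + 1.0)        # Wiener-style gain in [0, 1]
S_enhanced = mask * S           # enhanced features would feed the ASR back-end
print("mask shape:", mask.shape)
```

Joint front-end and back-end training, discussed at the end of the abstract, replaces this fixed masking rule with layers optimised together with the recogniser.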
Impacts of Radar Echoes on Internal Calibration Signals in the TerraSAR-X Instrument
For calibrating and monitoring the required radiometric stability, the radar instrument of TerraSAR-X features an internal calibration facility coupled into an additional port of the transmit/receive modules (TRMs). Calibration pulses are routed through the front-end to characterise critical elements and parameters of the transmit (TX) and receive (RX) paths. Changes in the signal path arise from thermal effects, degradation, or extreme conditions in space. The front-end TRMs controlling the phased array antenna are especially significant for instrument reliability.
There are many indications that the interference in the RX calibration signals is caused by an echo of a transmitted TerraSAR-X chirp pulse from the same data take. Different approaches, since implemented in the TerraSAR-X system, mitigate these interference effects. In orbit, the commanding sequence can be optimised to avoid interference; at processing level, averaging techniques minimise the noise effects inside the calibration signals. This paper presents the effects of the radar echoes on the whole internal calibration process and shows how they can be detected and minimised.
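The averaging technique mentioned above can be sketched in a few lines of Python: if the echo contaminating individual RX calibration pulses varies from pulse to pulse while the calibration signal itself is repeatable, averaging many pulses suppresses the interference roughly as the square root of the pulse count. The numbers below are made-up illustration values, not TerraSAR-X parameters.

```python
import numpy as np

# Illustrative sketch of coherent averaging of calibration pulses:
# the repeatable calibration signal is preserved while pulse-to-pulse
# interference averages down by about sqrt(n_pulses).

rng = np.random.default_rng(1)
n_pulses, n_samples = 64, 256   # assumed values

ideal = np.exp(1j * np.linspace(0, 4 * np.pi, n_samples))  # repeatable cal pulse
echo = (0.3 * rng.standard_normal((n_pulses, n_samples))
        + 0.3j * rng.standard_normal((n_pulses, n_samples)))  # varies per pulse
measured = ideal + echo

averaged = measured.mean(axis=0)
err_single = np.abs(measured[0] - ideal).std()
err_avg = np.abs(averaged - ideal).std()
print(f"residual error: single pulse {err_single:.3f}, averaged {err_avg:.3f}")
```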
Robust Sound Event Classification using Deep Neural Networks
The automatic recognition of sound events by computers is an important aspect of emerging applications such as automated surveillance, machine hearing and auditory scene understanding. Recent advances in machine learning, as well as in computational models of the human auditory system, have contributed to advances in this increasingly popular research field. Robust sound event classification, the ability to recognise sounds under real-world noisy conditions, is an especially challenging task. Classification methods translated from the speech recognition domain, using features such as mel-frequency cepstral coefficients, have been shown to perform reasonably well for the sound event classification task, although spectrogram-based or auditory image analysis techniques reportedly achieve superior performance in noise.
This paper outlines a sound event classification framework that compares auditory image front-end features with spectrogram image-based front-end features, using support vector machine and deep neural network classifiers. Performance is evaluated on a standard robust classification task at different levels of corrupting noise and with several system enhancements, and is shown to compare very well with current state-of-the-art classification techniques.
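A hedged sketch of the spectrogram-image front end compared in the paper: each sound is reduced to a fixed-size log-spectrogram "image", flattened, and classified with a support vector machine. Synthetic tones in noise stand in for real sound events, and the feature sizes are assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def spectrogram_image(x, n_fft=128, hop=64, bins=16):
    # Short-time spectrum, then downsampled to a fixed-size "image",
    # as spectrogram image-based front ends do.
    win = np.hanning(n_fft)
    frames = np.array([x[i:i + n_fft] * win
                       for i in range(0, len(x) - n_fft, hop)])
    S = np.abs(np.fft.rfft(frames, axis=1))
    img = S[:, :bins * 4].reshape(S.shape[0], bins, 4).mean(axis=2)
    return np.log1p(img).reshape(-1)

def make_event(label, n=2048):
    # Two synthetic event classes: tones of different frequency in noise.
    t = np.arange(n)
    tone = np.sin(2 * np.pi * (0.05 if label else 0.15) * t)
    return tone + 0.5 * rng.standard_normal(n)   # corrupting noise

X = np.array([spectrogram_image(make_event(y % 2)) for y in range(200)])
y = np.arange(200) % 2
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```

Swapping the SVC for a deep neural network on the same flattened images gives the second classifier family the paper compares.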
Compositional Performance Modelling with the TIPPtool
Stochastic process algebras have been proposed as compositional specification formalisms for performance models. In this paper, we describe a tool which aims at realising all beneficial aspects of compositional performance modelling, the TIPPtool. It incorporates methods for compositional specification as well as solution, based on state-of-the-art techniques, and wrapped in a user-friendly graphical front end. Apart from highlighting the general benefits of the tool, we also discuss some lessons learned during development and application of the TIPPtool. A non-trivial model of a real-life communication system serves as a case study to illustrate benefits and limitations.
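To illustrate the compositional idea behind such tools (without reproducing the TIPPtool's process algebra syntax), the Python sketch below composes two independently specified CTMC components via the Kronecker sum of their generators and solves the composed model for its steady state. Synchronising composition, which stochastic process algebras also support, is not covered here, and all rates are made-up illustration values.

```python
import numpy as np

def ctmc(failure, repair):
    # 2-state component generator: state 0 = up, state 1 = down.
    return np.array([[-failure, failure],
                     [repair, -repair]])

def compose(Q1, Q2):
    # Independent parallel composition = Kronecker sum of generators.
    return np.kron(Q1, np.eye(len(Q2))) + np.kron(np.eye(len(Q1)), Q2)

def steady_state(Q):
    # Solve pi Q = 0 together with the normalisation sum(pi) = 1.
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Two components specified separately, then composed and solved.
Q = compose(ctmc(0.01, 1.0), ctmc(0.02, 0.5))
pi = steady_state(Q)
print("P(both components up):", pi[0])
```

The pay-off of compositionality shows even in this toy: each component is specified and understood on its own, and the composed state space is built mechanically.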
