Joint Tensor Factorization and Outlying Slab Suppression with Applications
We consider factoring low-rank tensors in the presence of outlying slabs.
This problem is important in practice, because data collected in many
real-world applications, such as speech, fluorescence, and some social network
data, fit this paradigm. Prior work tackles this problem by iteratively
selecting a fixed number of slabs and fitting, a procedure which may not
converge. We formulate this problem from a group-sparsity-promoting point of
view and propose an alternating optimization framework to handle the
corresponding $\ell_p$ ($0 < p \leq 1$) minimization-based low-rank tensor
factorization problem. The proposed algorithm has per-iteration complexity
similar to that of the plain trilinear alternating least squares (TALS) algorithm.
Convergence of the proposed algorithm is also easy to analyze under the
framework of alternating optimization and its variants. In addition,
regularization and constraints can be easily incorporated to make use of
\emph{a priori} information on the latent loading factors. Simulations and real
data experiments on blind speech separation, fluorescence data analysis, and
social network mining are used to showcase the effectiveness of the proposed
algorithm.
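The group-sparsity idea can be pictured as an iteratively reweighted trilinear ALS: each slab's residual norm feeds back into a per-slab weight, so outlying slabs are down-weighted rather than fitted. The following is a minimal illustrative sketch under that reading, not the paper's exact algorithm; the weight update w_i ∝ (||r_i||² + ε)^(p/2 − 1) is the standard IRLS surrogate for an ℓp group penalty.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Khatri-Rao product: (I*J, R) from (I, R) and (J, R)."""
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

def robust_tals(X, R, p=0.5, iters=60, eps=1e-8, seed=0):
    """Trilinear ALS with IRLS reweighting of the slabs along mode 0.

    Approximately minimizes sum_i ||X_i - slab_i(A, B, C)||^p, which
    promotes group sparsity of slab residuals, so outlying slabs are
    suppressed instead of being fitted.
    """
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X0 = X.reshape(I, J * K)                       # mode-0 unfolding: slabs are rows
    w = np.ones(I)
    for _ in range(iters):
        # A-update: each slab's row is solved independently (weights cancel row-wise)
        M = khatri_rao(B, C)                       # (J*K, R)
        A = np.linalg.lstsq(M, X0.T, rcond=None)[0].T
        # IRLS weights from the slab residual norms: w_i ~ (||r_i||^2 + eps)^(p/2 - 1)
        res = np.linalg.norm(X0 - A @ M.T, axis=1)
        w = (res**2 + eps) ** (p / 2 - 1)
        s = np.sqrt(w)[:, None]
        Xw = X * np.sqrt(w)[:, None, None]         # slab-weighted tensor
        # B- and C-updates on the weighted data
        X1 = np.moveaxis(Xw, 1, 0).reshape(J, I * K)
        B = np.linalg.lstsq(khatri_rao(s * A, C), X1.T, rcond=None)[0].T
        X2 = np.moveaxis(Xw, 2, 0).reshape(K, I * J)
        C = np.linalg.lstsq(khatri_rao(s * A, B), X2.T, rcond=None)[0].T
    return A, B, C, w
```

On a low-rank tensor with a few heavily corrupted slabs, the returned weights `w` end up markedly smaller on the corrupted slabs, which is the suppression effect described above.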
Convolutive Blind Source Separation Methods
In this chapter, we provide an overview of existing algorithms for blind source separation of convolutive audio mixtures. We provide a taxonomy within which many of the existing algorithms can be organized, and we present published results from those algorithms that have been applied to real-world audio separation tasks.
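For concreteness, the convolutive mixing model that these algorithms invert is x_m(t) = Σ_n Σ_τ h_{mn}(τ) s_n(t − τ): each microphone observes a sum of FIR-filtered sources. A minimal sketch of the forward model (hypothetical helper name):

```python
import numpy as np

def convolutive_mix(sources, filters):
    """Convolutive mixture: x_m(t) = sum_n sum_tau h_{mn}(tau) s_n(t - tau).

    sources: (N, T) array of N source signals.
    filters: (M, N, L) array of length-L mixing filters h_{mn}.
    Returns the (M, T + L - 1) mixture signals.
    """
    N, T = sources.shape
    M, N2, L = filters.shape
    assert N == N2, "filter bank must match the number of sources"
    x = np.zeros((M, T + L - 1))
    for m in range(M):
        for n in range(N):
            x[m] += np.convolve(filters[m, n], sources[n])
    return x
```

Frequency-domain methods in the taxonomy exploit the fact that, after an STFT with windows longer than L, this convolution is approximately a separate instantaneous mixture X(f, t) ≈ H(f) S(f, t) in each frequency bin.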
Massive MIMO is a Reality -- What is Next? Five Promising Research Directions for Antenna Arrays
Massive MIMO (multiple-input multiple-output) is no longer a "wild" or
"promising" concept for future cellular networks - in 2018 it became a reality.
Base stations (BSs) with 64 fully digital transceiver chains were commercially
deployed in several countries, the key ingredients of Massive MIMO have made it
into the 5G standard, the signal processing methods required to achieve
unprecedented spectral efficiency have been developed, and the limitation due
to pilot contamination has been resolved. Even the development of fully digital
Massive MIMO arrays for mmWave frequencies - once viewed as prohibitively
complicated and costly - is well underway. In a few years, Massive MIMO with
fully digital transceivers will be a mainstream feature at both sub-6 GHz and
mmWave frequencies. In this paper, we explain how the first chapter of the
Massive MIMO research saga has come to an end, while the story has just begun.
The coming wide-scale deployment of BSs with massive antenna arrays opens the
door to a brand new world where spatial processing capabilities are
omnipresent. In addition to mobile broadband services, the antennas can be used
for other communication applications, such as low-power machine-type or
ultra-reliable communications, as well as non-communication applications such
as radar, sensing and positioning. We outline five new Massive MIMO related
research directions: Extremely large aperture arrays, Holographic Massive MIMO,
Six-dimensional positioning, Large-scale MIMO radar, and Intelligent Massive
MIMO.

Comment: 20 pages, 9 figures, submitted to Digital Signal Processing.
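As one illustration of the spatial processing capabilities that large arrays make omnipresent, a zero-forcing uplink combiner (a standard linear scheme, used here purely as an example) cancels inter-user interference whenever the base station has many more antennas than users:

```python
import numpy as np

def zf_combining(H):
    """Zero-forcing receive combiner for the uplink: W = H (H^H H)^{-1}.

    H is (M, K): the channel from K single-antenna users to M BS antennas,
    with M >= K. The combined estimate s_hat = W^H y = W^H (H s + n)
    equals s plus filtered noise, i.e. inter-user interference is nulled.
    """
    return H @ np.linalg.inv(H.conj().T @ H)
```

With M = 64 antennas and K = 8 users (the commercially deployed configuration mentioned above), `W.conj().T @ H` is the 8x8 identity, so each user's stream is recovered free of the others.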
Near-Instantaneously Adaptive HSDPA-Style OFDM Versus MC-CDMA Transceivers for WiFi, WiMAX, and Next-Generation Cellular Systems
Burst-by-burst (BbB) adaptive high-speed downlink packet access (HSDPA) style multicarrier systems are reviewed, identifying their most critical design aspects. These systems exhibit numerous attractive features, rendering them eminently eligible for employment in next-generation wireless systems. It is argued that BbB-adaptive or symbol-by-symbol adaptive orthogonal frequency division multiplex (OFDM) modems counteract the near-instantaneous channel quality variations and hence attain an increased throughput or robustness in comparison to their fixed-mode counterparts. Although they act quite differently, various diversity techniques, such as Rake receivers and space-time block coding (STBC), are also capable of mitigating the channel quality variations in their effort to reduce the bit error ratio (BER), provided that the individual antenna elements experience independent fading. By contrast, in the presence of correlated fading imposed by shadowing or time-variant multiuser interference, the benefits of space-time coding erode, and it is unrealistic to expect that a fixed-mode space-time coded system remains capable of maintaining a near-constant BER.
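The core of a BbB-adaptive modem of the kind reviewed here is a per-burst mode decision driven by the estimated channel quality. A toy sketch; the SNR thresholds below are illustrative assumptions, not values from the paper or any standard:

```python
def select_mode(snr_db,
                thresholds=(6.0, 12.0, 18.0),
                modes=('BPSK', 'QPSK', '16QAM', '64QAM')):
    """Pick the highest-order modulation mode whose (assumed) SNR
    threshold is met; called once per burst on the latest channel
    quality estimate, so the throughput tracks the channel."""
    k = sum(snr_db >= t for t in thresholds)
    return modes[k]
```

When the channel is good the modem transmits 64QAM for high throughput; as the instantaneous SNR drops it falls back toward BPSK, trading throughput for robustness instead of suffering a BER spike as a fixed-mode system would.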
Real-time Sound Source Separation For Music Applications
Sound source separation refers to the task of extracting individual sound sources from some number of mixtures of those sound sources. In this thesis, a novel sound source separation algorithm for musical applications is presented. It leverages the fact that the vast majority of commercially recorded music since the 1950s has been mixed down for two-channel reproduction, more commonly known as stereo. The algorithm presented in Chapter 3 of this thesis requires no prior knowledge or learning and performs the task of separation based purely on azimuth discrimination within the stereo field. The algorithm exploits the use of the pan pot as a means to achieve image localisation within stereophonic recordings. As such, only an interaural intensity difference exists between the left and right channels for a single source. We use gain scaling and phase cancellation techniques to expose frequency-dependent nulls across the azimuth domain, from which source separation and resynthesis are carried out. The algorithm is demonstrated not only to be state of the art in the field of sound source separation but also to be a useful pre-process for other tasks such as music segmentation and surround sound upmixing.
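The gain-scaling and phase-cancellation step can be sketched directly: for a pan-potted source, L(f) = g_L·S(f) and R(f) = g_R·S(f), so |L(f) − g·R(f)| vanishes exactly at g = g_L/g_R. Scanning g therefore exposes a null at each source's azimuth in every frequency bin it occupies. A minimal illustration of that frequency-azimuth plane (not the thesis's full algorithm):

```python
import numpy as np

def azimuth_nulls(left, right, gains):
    """Frequency-azimuth plane |L(f) - g * R(f)|.

    left, right: complex STFT frames of shape (F,).
    gains: candidate pan-gain ratios of shape (G,).
    Returns a (G, F) array whose per-bin minima sit at the gain
    matching the pan ratio of the source occupying that bin.
    """
    return np.abs(left[None, :] - gains[:, None] * right[None, :])
```

Separation then amounts to collecting, for each candidate azimuth, the bins whose null falls at that gain, and resynthesizing them via the inverse STFT.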
A critical review of approaches to mitigating bias in fingerprint identification
Fingerprint identification is a discipline used within forensic science which assists in criminal investigations [1, 2]. The process of fingerprint identification involves the comparison of crime scene evidence with known exemplars. This form of examination is heavily reliant on human examiners and their conclusions as to whether there is an identification, an exclusion, or insufficient information to identify [3]. This form of forensic identification has become a focus due to concern over the effects of cognitive bias on examiners' conclusions. These concerns have prompted research into approaches to mitigate bias throughout forensic fingerprint protocols. Research into the common sources of bias during a fingerprint examination was conducted to gain an understanding of how bias may potentially be reduced. Throughout this dissertation the psychological and forensic approaches to bias were reviewed and the international and Australian approaches to bias mitigation were discussed. This found that there was evidence of a widespread issue regarding human cognitive bias in fingerprint examiners; however, there were no uniform mitigation strategies in place. Limitations to recommended approaches and currently implemented strategies were reviewed, identifying that there is still a need for further research into the theoretical approaches to overcoming bias. This led to the formation of a study that aims to identify the theoretical approaches suggested by the literature and to critically review the effectiveness of these methods in controlling and reducing bias. The potential outcome of the suggested study is a useful document that will provide the practical field of forensic science with a comprehensive and critical review of approaches to assist in the development of standardised protocols.
Tensor Analysis and Fusion of Multimodal Brain Images
Current high-throughput data acquisition technologies probe dynamical systems
with different imaging modalities, generating massive data sets at different
spatial and temporal resolutions, posing challenging problems in multimodal data
fusion. A case in point is the attempt to parse out the brain structures and
networks that underpin human cognitive processes by analysis of different
neuroimaging modalities (functional MRI, EEG, NIRS etc.). We emphasize that the
multimodal, multi-scale nature of neuroimaging data is well reflected by a
multi-way (tensor) structure where the underlying processes can be summarized
by a relatively small number of components or "atoms". We introduce
Markov-Penrose diagrams - an integration of Bayesian DAG and tensor network
notation in order to analyze these models. These diagrams not only clarify
matrix and tensor EEG and fMRI time/frequency analysis and inverse problems,
but also help understand multimodal fusion via Multiway Partial Least Squares
and Coupled Matrix-Tensor Factorization. We show here, for the first time, that
Granger causal analysis of brain networks is a tensor regression problem, thus
allowing the atomic decomposition of brain networks. Analysis of EEG and fMRI
recordings shows the potential of the methods and suggests their use in other
scientific domains.

Comment: 23 pages, 15 figures, submitted to Proceedings of the IEEE.
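The "atoms" referred to above are the rank-one components of a CP (canonical polyadic) decomposition, X[i,j,k] ≈ Σ_r A[i,r] B[j,r] C[k,r]. A bare-bones alternating-least-squares sketch of that factorization (illustrative only; it is the building block, not the coupled multimodal fusion method itself):

```python
import numpy as np

def cp_als(X, R, iters=200, seed=0):
    """Plain CP decomposition of a 3-way tensor via alternating LS:
    X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r], the rank-one 'atoms'."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.standard_normal((d, R)) for d in (I, J, K))

    def kr(U, V):
        # Column-wise Khatri-Rao product, rows indexed (u, v) with u outer
        return np.einsum('ir,jr->ijr', U, V).reshape(-1, U.shape[1])

    for _ in range(iters):
        # Each factor is a linear LS problem given the other two
        A = np.linalg.lstsq(kr(B, C), X.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(kr(A, C), np.moveaxis(X, 1, 0).reshape(J, -1).T,
                            rcond=None)[0].T
        C = np.linalg.lstsq(kr(A, B), np.moveaxis(X, 2, 0).reshape(K, -1).T,
                            rcond=None)[0].T
    return A, B, C
```

In the multimodal setting described above, the same machinery is run with factors shared (coupled) across the EEG and fMRI tensors, and the Granger-causality-as-tensor-regression result expresses the regression coefficients in this same atomic form.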
COrE (Cosmic Origins Explorer) A White Paper
COrE (Cosmic Origins Explorer) is a fourth-generation full-sky,
microwave-band satellite recently proposed to ESA within Cosmic Vision
2015-2025. COrE will provide maps of the microwave sky in polarization and
temperature in 15 frequency bands, ranging from 45 GHz to 795 GHz, with an
angular resolution ranging from 23 arcmin (45 GHz) to 1.3 arcmin (795 GHz), and
sensitivities roughly 10 to 30 times better than PLANCK (depending on the
frequency channel). The COrE mission will lead to breakthrough science in a
wide range of areas, ranging from primordial cosmology to galactic and
extragalactic science. COrE is designed to detect the primordial gravitational
waves generated during the epoch of cosmic inflation at more than $3\sigma$
for $r = (T/S) \geq 10^{-3}$. It will also measure the CMB gravitational lensing
deflection power spectrum to the cosmic variance limit on all linear scales,
allowing us to probe absolute neutrino masses better than laboratory
experiments and down to plausible values suggested by the neutrino oscillation
data. COrE will also search for primordial non-Gaussianity with significant
improvements over Planck in its ability to constrain the shape (and amplitude)
of non-Gaussianity. In the areas of galactic and extragalactic science, in its
highest frequency channels COrE will provide maps of the galactic polarized
dust emission, allowing us to map the galactic magnetic field in areas of
diffuse emission not otherwise accessible, in order to probe the initial conditions for
star formation. COrE will also map the galactic synchrotron emission thirty
times better than PLANCK. This White Paper reviews the COrE science program,
our simulations on foreground subtraction, and the proposed instrumental
configuration.

Comment: 90 pages, LaTeX, 15 figures (revised 28 April 2011, references added,
minor errors corrected).