A possible explanation why the Theta+ is seen in some experiments and not in others
To understand the whole set of positive and null data on Theta+(1530) production, we suggest the hypothesis that multiquark hadrons are mainly generated from many-quark states, which emerge either as short-term hadron fluctuations or as hadron remnants in hard processes. This approach allows us to describe both the non-observation of the Theta+ in current null experiments and the peculiar features of its production in positive experiments. Further, we are able to propose new experiments that might be decisive for the problem of the Theta+'s existence. Distributions of the Theta+ in such experiments can give important information both on the higher Fock components of conventional hadrons and on the structure and hadronization properties of hadron remnants produced in hard processes. We also explain that the description of multiquark hadrons may require a modified form of the constituent quark model, with quark masses and couplings intermediate between their values for the familiar constituent quarks and the current quarks.
Comment: 18 pages. Some changes in the text; experimental suggestions collected in a special subsection; references added and refreshed.
Computer-Assisted Algorithms for Ultrasound Imaging Systems
Ultrasound imaging works on the principle of transmitting ultrasound waves into the body and reconstructing images of internal organs from the strength of the echoes. Ultrasound imaging is considered safe and economical, and it can image organs in real time, which makes it a widely used diagnostic imaging modality in health care. Ultrasound imaging covers a broad spectrum of medical diagnostics, including diagnosis of the kidney, liver, and pancreas, fetal monitoring, etc. Currently, diagnosis through ultrasound scanning is clinic-centered, and patients who need an ultrasound scan have to visit a hospital to obtain a diagnosis. The services of an ultrasound system are thus constrained to hospitals, and the technology has not realized its potential in remote health-care and point-of-care diagnostics, owing to its large form factor, a shortage of sonographers, low signal-to-noise ratio, high diagnostic subjectivity, etc. In this thesis, we address these issues with the objective of making ultrasound imaging more reliable for point-of-care and remote health-care applications. To achieve this goal, we propose (i) computer-assisted algorithms to improve diagnostic accuracy and assist semi-skilled persons in scanning, (ii) speckle-suppression algorithms to improve the diagnostic quality of ultrasound images, (iii) a reliable telesonography framework to address the shortage of sonographers, and (iv) a programmable portable ultrasound scanner to operate in point-of-care and remote health-care applications.
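Speckle suppression of the kind listed in (ii) is commonly approached with local adaptive-statistics filters. The sketch below is a minimal pure-NumPy implementation of the classic Lee filter, not the specific algorithms developed in the thesis; the window size and the speckle-noise variance estimate are illustrative assumptions.

```python
import numpy as np

def lee_filter(img, win=5, noise_var=1.0):
    """Classic Lee filter: each pixel is pulled toward its local mean in
    proportion to how much of the local variance exceeds the assumed
    speckle-noise variance (`noise_var`, an illustrative parameter)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mu, var = w.mean(), w.var()
            # Adaptive gain in [0, 1): k ~ 0 smooths (homogeneous region),
            # k ~ 1 preserves detail (edge or structure).
            k = max(var - noise_var, 0.0) / (var + 1e-12)
            out[i, j] = mu + k * (img[i, j] - mu)
    return out
```

In homogeneous regions the local variance is explained by speckle alone, so the filter replaces the pixel by the local mean; near structures the gain rises and detail is preserved.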
Interference Alignment for Cognitive Radio Communications and Networks: A Survey
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Interference alignment (IA) is an innovative wireless transmission strategy that has been shown to be a promising technique for achieving the optimal capacity scaling of a multiuser interference channel at asymptotically high signal-to-noise ratio (SNR). Transmitters exploit the availability of multiple signaling dimensions in order to align their mutual interference at the receivers. Most of the research has focused on developing algorithms for determining alignment solutions, as well as on proving interference alignment's theoretical ability to achieve the maximum degrees of freedom in a wireless network. Cognitive radio, on the other hand, is a technique used to improve the utilization of the radio spectrum by opportunistically sensing and accessing unused licensed frequency spectrum without causing harmful interference to the licensed users. With the increased deployment of wireless services, the possibility of detecting unused frequency spectrum diminishes. Thus, the concept of introducing interference alignment in cognitive radio has become a very attractive proposition. This paper provides a survey of the implementation of IA in cognitive radio under the main research paradigms, along with a summary and analysis of the results under each system model.
Peer reviewed
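As an illustration of the alignment conditions such surveys discuss, the snippet below implements the well-known closed-form IA solution for the 3-user interference channel with two antennas per node; the channel realizations are random placeholders. Each transmitter sends one stream, and the precoders are chosen so that the two interfering signals at every receiver collapse into a single spatial dimension, leaving one interference-free dimension per user.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 3, 2
# H[(k, j)]: M x M complex channel from transmitter j to receiver k (random assumption)
H = {(k, j): rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
     for k in range(K) for j in range(K)}

inv = np.linalg.inv
# Closed-form alignment: v1 is any eigenvector of the chained channel matrix E
E = inv(H[(2, 0)]) @ H[(2, 1)] @ inv(H[(0, 1)]) @ H[(0, 2)] @ inv(H[(1, 2)]) @ H[(1, 0)]
_, V = np.linalg.eig(E)
v1 = V[:, 0]
v2 = inv(H[(2, 1)]) @ H[(2, 0)] @ v1   # aligns interference at receiver 3
v3 = inv(H[(1, 2)]) @ H[(1, 0)] @ v1   # aligns interference at receiver 2

def aligned(a, b):
    """True if the two 2-vectors span the same one-dimensional subspace."""
    s = np.linalg.svd(np.stack([a, b], axis=1), compute_uv=False)
    return s[1] / s[0] < 1e-8

# Interference from the two unintended transmitters aligns at every receiver
assert aligned(H[(0, 1)] @ v2, H[(0, 2)] @ v3)
assert aligned(H[(1, 0)] @ v1, H[(1, 2)] @ v3)
assert aligned(H[(2, 0)] @ v1, H[(2, 1)] @ v2)
```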
High Frame Rate Volumetric Imaging of Microbubbles Using a Sparse Array and Spatial Coherence Beamforming
Volumetric ultrasound imaging of blood flow with microbubbles enables a more complete visualization of the microvasculature. Sparse arrays are ideal candidates for volumetric imaging at reduced manufacturing complexity and cable count. However, due to the small number of transducer elements, sparse arrays often suffer from high clutter levels, especially when wide beams are transmitted to increase the frame rate. In this study, we demonstrate, with a prototype sparse-array probe and a diverging-wave transmission strategy, that a uniform transmission field can be achieved. With the implementation of a spatial coherence beamformer, the background clutter signal can be effectively suppressed, leading to a signal-to-background ratio improvement of 25 dB. With this approach, we demonstrate the volumetric visualization of single microbubbles in a tissue-mimicking phantom as well as vasculature mapping in a live chicken embryo chorioallantoic membrane.
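A spatial coherence beamformer scores each pixel by the correlation of its delayed channel signals across the aperture rather than by their sum, which is why diffuse clutter is suppressed while echoes from point scatterers such as microbubbles are retained. The following is a generic short-lag spatial coherence sketch; the array size, lag range, and data layout are illustrative assumptions, not the authors' exact processing.

```python
import numpy as np

def slsc_value(channel_data, max_lag=5, eps=1e-12):
    """Short-lag spatial coherence for one pixel.
    channel_data: (n_elements, n_samples) array of delayed per-channel data
    for that pixel. Returns the normalized cross-correlation averaged over
    element lags 1..max_lag: coherent echoes score near 1, diffuse clutter
    scores near 0."""
    n_el = channel_data.shape[0]
    vals = []
    for m in range(1, max_lag + 1):
        a = channel_data[:n_el - m]          # element i
        b = channel_data[m:]                 # element i + m
        num = np.sum(a * b, axis=1)
        den = np.sqrt(np.sum(a * a, axis=1) * np.sum(b * b, axis=1)) + eps
        vals.append(np.mean(num / den))      # average over element pairs
    return float(np.mean(vals))              # average over lags
```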
Diffusion and perfusion MRI and applications in cerebral ischaemia
Two MRI techniques, namely diffusion and perfusion imaging, are increasingly used for evaluating the pathophysiology of stroke. This work describes the use of these techniques, together with more conventional MRI modalities (such as T1- and T2-weighted imaging), in the investigation of cerebral ischaemia. The work was performed both in a paediatric population on a whole-body clinical MR system (1.5 T) and in an animal model of focal ischaemia at high magnetic field strength (8.5 T).
For the paediatric studies, a single shot echo planar imaging (EPI) sequence was developed to enable the on-line calculation of maps of the trace of the diffusion tensor. In the process of this development, it was necessary to address two different imaging artefacts in these maps: eddy current induced image shifts, and residual Nyquist ghost artefacts. Perfusion imaging was implemented using an EPI sequence to follow the passage through the brain of a bolus of a paramagnetic contrast agent. Computer simulations were performed to evaluate the limitations of this technique in the quantification of cerebral blood flow when delay in the arrival and dispersion of the bolus of contrast agent are not accounted for. These MRI techniques were applied to paediatric patients to identify acute ischaemic events, as well as to differentiate between multiple acute events, or between acute and chronic events. Furthermore, the diffusion and perfusion findings were shown to contribute significantly to the management of patients with high risk of stroke, and in the evaluation of treatment outcome.
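The bolus-tracking analysis referred to above rests on the indicator-dilution model: the tissue concentration curve is the arterial input function (AIF) convolved with a scaled residue function, and the central volume principle CBV = CBF · MTT links the derived quantities. The sketch below simulates this forward model with hypothetical curve shapes and parameter values; an unmodeled delay or dispersion of the AIF biases estimates derived from such curves, which is what the computer simulations in this work quantify.

```python
import numpy as np

dt = 0.5                              # sampling interval [s] (assumed)
t = np.arange(0.0, 60.0, dt)
aif = t**3 * np.exp(-t / 1.5)         # gamma-variate-like bolus shape (assumed)
aif /= aif.sum() * dt                 # normalize the AIF to unit area
cbf_true, mtt = 0.01, 4.0             # CBF [1/s] and mean transit time [s] (assumed)
resid = np.exp(-t / mtt)              # exponential residue function, R(0) = 1

# Forward model: tissue curve = CBF * (AIF convolved with R)
tissue = cbf_true * dt * np.convolve(aif, resid)[: t.size]

# Central volume principle: CBV = area(tissue) / area(AIF) = CBF * MTT
cbv = tissue.sum() * dt / (aif.sum() * dt)
# An unmodeled shift of the AIF, e.g. np.roll(aif, int(2 / dt)), would bias
# a CBF estimate deconvolved from these curves.
```

With this discretization, the recovered CBV matches CBF times the (discretely summed) mean transit time, which is the relation perfusion quantification relies on.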
In the animal experiments, permanent middle cerebral artery occlusion was performed in rats to investigate longitudinally the acute MRI changes (first 4-6 hours) following an ischaemic event. This longitudinal analysis contributed to the understanding of the evolution of the ischaemic lesion. Furthermore, the findings allowed the acute identification of tissue 'at risk' of infarction.
Prediction of the Outcome in Cardiac Arrest Patients Undergoing Hypothermia Using EEG Wavelet Entropy
Cardiac arrest (CA) is the leading cause of death in the United States. Induction of hypothermia has been found to improve the functional recovery of CA patients after resuscitation. However, there is as yet no clear guideline for clinicians to determine the prognosis of CA patients treated with hypothermia. The present work aimed at the development of a prognostic marker for CA patients undergoing hypothermia. A quantitative measure of the complexity of electroencephalogram (EEG) signals, called wavelet sub-band entropy, was employed to predict the patients' outcomes. We hypothesized that the EEG signals of the patients who survived would demonstrate more complexity and consequently higher values of wavelet sub-band entropy.
A dataset of 16-channel EEG signals collected from CA patients undergoing hypothermia at Long Beach Memorial Medical Center was used to test the hypothesis. Following preprocessing of the signals and implementation of the wavelet transform, the wavelet sub-band entropies were calculated for different frequency bands and EEG channels. The values of the wavelet sub-band entropies were then compared between the two groups of patients: survived vs. non-survived. Our results revealed that the brain's high-frequency oscillations (between 64-100 Hz) captured from the inferior frontal lobes are significantly more complex in the CA patients who survived (p-value ≤ 0.02). Given that the non-invasive measurement of EEG is part of the standard clinical assessment for CA patients, the results of this study can enhance the management of CA patients treated with hypothermia.
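The wavelet sub-band entropy used as the prognostic marker can be sketched as follows: decompose the signal with a discrete wavelet transform and compute, for each detail sub-band, the Shannon entropy of the normalized coefficient energies. The snippet uses a hand-rolled Haar transform and illustrative parameters, not the exact wavelet or band edges of the study.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    if x.size % 2:                 # drop a trailing sample if the length is odd
        x = x[:-1]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def subband_entropy(coeffs, eps=1e-12):
    """Shannon entropy (bits) of the normalized coefficient energies."""
    e = coeffs**2
    p = e / (e.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))

def wavelet_subband_entropies(x, levels=4):
    """Entropy of each detail sub-band, highest-frequency band first."""
    out, a = [], x
    for _ in range(levels):
        a, d = haar_dwt(a)
        out.append(subband_entropy(d))
    return out
```

A broadband, complex signal spreads its energy over many coefficients and scores a high entropy, whereas a sparse or burst-like signal concentrates its energy and scores low, which is the contrast the survived/non-survived comparison exploits.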
Multi-user MIMO wireless communications
Multiple-input, multiple-output (MIMO) systems are a key component of future
wireless communication systems, because of their promising improvement in terms
of performance and bandwidth efficiency. An important research topic is the
study of multi-user (MU) MIMO systems. Such systems have the potential to
combine the high throughput achievable with MIMO processing with the benefits of
space division multiple access (SDMA). The main question from a practical
standpoint is whether the initially predicted capacity gains can be obtained in
more realistic scenarios and what specific gains result from adding more
antennas and overhead or computational power to obtain channel state information
(CSI) at the transceivers.
In this thesis we introduce new linear and non-linear MU MIMO processing
techniques. The approach used for the design of the precoding matrix is general
and the resulting algorithms can address several optimization criteria with an
arbitrary number of antennas at the user terminals (UTs). This is achieved by
designing the precoding matrices in two steps. In the first step we minimize the
overlap of the row spaces spanned by the effective channel matrices of different
users. In the next step, we optimize the system performance with respect to the
specific optimization criterion assuming a set of parallel single-user MIMO
channels.
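The first of the two steps can be illustrated with classic block diagonalization, a non-regularized relative of the RBD scheme proposed in this thesis: each user's precoder is restricted to the null space of the other users' stacked channels, after which the remaining single-user MIMO channel is optimized, here by SVD eigenbeamforming. The dimensions and channel realizations below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_tx, n_rx, n_users = 6, 2, 3           # hypothetical antenna/user counts
H = [rng.normal(size=(n_rx, n_tx)) for _ in range(n_users)]

precoders = []
for k in range(n_users):
    # Step 1: stack the other users' channels and take the right null space,
    # so user k's transmission causes them zero interference.
    H_other = np.vstack([H[j] for j in range(n_users) if j != k])
    _, _, Vt = np.linalg.svd(H_other)
    null = Vt[H_other.shape[0]:].T      # (n_tx, n_tx - rank) null-space basis
    # Step 2: optimize within the resulting single-user channel H[k] @ null,
    # here simply by SVD-based eigenbeamforming.
    _, _, Vt2 = np.linalg.svd(H[k] @ null)
    precoders.append(null @ Vt2.T)

# The effective multi-user channel is block diagonal: no inter-user interference
for k in range(n_users):
    for j in range(n_users):
        if j != k:
            assert np.allclose(H[j] @ precoders[k], 0.0, atol=1e-10)
```

RBD differs in that it regularizes the null-space projection instead of enforcing it exactly, trading a small residual interference for a better-conditioned effective channel at low SNR.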
As previously reported in the literature, minimum mean-squared-error (MMSE) processing is optimum for single-antenna UTs. However, MMSE suffers from a performance loss when users are equipped with more than one antenna. Therefore, this thesis proposes two MU MIMO processing techniques derived from different MSE criteria: successive MMSE and regularized block diagonalization. By iterating the closed-form solution with appropriate power loading, we are able to extract the full diversity of the system and empirically approach the maximum sum-rate capacity even in the case of high multi-user interference. Joint processing of the MIMO channels yields maximum diversity regardless of the level of multi-user interference.
As these techniques rely on either instantaneous or long-term CSI being available at the base station for precoding and decoding, it was important to investigate the influence of transceiver front-end imperfections and channel-estimation errors on their performance.
For a comprehensive assessment of multi-antenna techniques, the performance must be considered at the system level, since many effects of spatial processing are not tractable at the link level. System-level investigations have shown that MU MIMO precoding techniques provide several times higher data rates than single-input single-output systems, with only slightly increased pilot and control overhead.
New Digital Audio Watermarking Algorithms for Copyright Protection
This thesis investigates the development of digital audio watermarking to address issues such as copyright protection. Over the past two decades, many digital watermarking algorithms have been developed, each with its own advantages and disadvantages. The main aim of this thesis was to develop a new watermarking algorithm within an existing Fast Fourier Transform framework. This resulted in the development of a Complex Spectrum Phase Evolution based watermarking algorithm. In this new implementation, the embedding positions are generated dynamically, making the watermark more difficult for an attacker to remove, and the watermark information is embedded by manipulating the spectral components in the time domain, thereby reducing audible distortion. Further improvements were attained when the embedding criterion was based on bin-location comparison instead of magnitude, rendering it more robust against attacks that interfere with the spectral magnitudes.
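For illustration only, the snippet below embeds one bit per frame by enforcing an ordering between the magnitudes of two FFT bins, a generic comparison-based scheme in the spirit of the criterion mentioned above; it is not the Complex Spectrum Phase Evolution algorithm itself, and the bin indices and margin are arbitrary assumptions.

```python
import numpy as np

def embed_bit(frame, bit, k1=20, k2=21, margin=1.5):
    """Embed one bit in an audio frame by ordering two FFT bin magnitudes:
    |X[k1]| > |X[k2]| encodes 1, the reverse encodes 0. The `margin` factor
    gives the ordering some robustness headroom. Assumes both bins carry
    nonzero energy, as is the case for typical audio frames."""
    X = np.fft.rfft(frame)
    m1, m2 = abs(X[k1]), abs(X[k2])
    base = max(m1, m2, 1e-9)
    hi, lo = (k1, k2) if bit else (k2, k1)
    # Rescale each bin to the target magnitude, preserving its phase
    X[hi] = X[hi] / max(abs(X[hi]), 1e-9) * base * margin
    X[lo] = X[lo] / max(abs(X[lo]), 1e-9) * base / margin
    return np.fft.irfft(X, n=frame.size)

def extract_bit(frame, k1=20, k2=21):
    X = np.fft.rfft(frame)
    return int(abs(X[k1]) > abs(X[k2]))
```

Because only relative magnitudes carry the bit, the scheme survives attacks that rescale the spectrum uniformly, which is the motivation for comparison-based criteria.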
However, it was discovered that this new audio watermarking algorithm has some disadvantages, such as a relatively low capacity and inconsistent robustness across different audio files. Therefore, a further aim of this thesis was to improve the algorithm from a different perspective.
Improvements were investigated using a Singular Value Decomposition framework, within which a novel observation was made. Furthermore, a psychoacoustic model was incorporated to suppress audible distortion. This resulted in a watermarking algorithm with a higher capacity and a more consistent robustness.
The overall result was that two new digital audio watermarking algorithms were developed which are complementary in their performance, thereby opening more opportunities for further research.