Doctor of Philosophy dissertation
The use of multicarrier techniques has allowed the rapid expansion of broadband wireless communications. Orthogonal frequency division multiplexing (OFDM) has been the dominant technology of the past decade. It has been deployed in both indoor Wi-Fi and cellular environments, and has been researched for use in underwater acoustic channels. Recent work in wireless communications includes the extension of OFDM to multiple access applications. Multiple access OFDM, or orthogonal frequency division multiple access (OFDMA), has been implemented in the third generation partnership project (3GPP) long-term evolution (LTE) downlink. In order to reduce the intercarrier interference (ICI) when users' synchronization is relaxed, filterbank multicarrier communication (FBMC) systems have been proposed. The first contribution made in this dissertation is a novel study of the classical FBMC systems that were presented in the 1960s. We note that two distinct methods were presented then. We show that these methods are closely related through a modulation and a time/frequency scaling step. For cellular channels, OFDM also has the weakness of a relatively large peak-to-average power ratio (PAPR). A special form of OFDM for the uplink of multiple access networks, called single carrier frequency division multiple access (SC-FDMA), has been developed to mitigate this issue. In this regard, this dissertation makes two contributions. First, we develop an optimization method for designing an effective precoding method for SC-FDMA systems. Second, we show how an equivalent to SC-FDMA can be developed for systems that are based on FBMC. In underwater acoustic communications, researchers are investigating the use of multicarrier systems like OFDM in underwater channels. The movement of the communicating vehicles scales the received signal along the time axis, which is often referred to as Doppler scaling.
To undo the signal degradation, researchers have investigated methods to estimate the Doppler scaling factor and restore the original signal using resampling. We investigate a method called the nonuniform fast Fourier transform (NUFFT) and apply it to increase the precision of the detection and correction of the Doppler scaling factor. The NUFFT is applied to both OFDM and FBMC, and its performance on experimental data obtained from at-sea experiments is investigated.
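The resampling step can be illustrated with a toy example. The sketch below uses plain linear interpolation on a synthetic tone rather than the NUFFT, and assumes the Doppler scaling factor a has already been estimated; all signal parameters are invented for the illustration.

```python
import numpy as np

fs = 48_000.0                  # sample rate in Hz (illustrative)
f0 = 1_000.0                   # transmitted tone in Hz (illustrative)
a = 1.002                      # Doppler scaling factor, assumed already estimated
t = np.arange(0.0, 0.5, 1 / fs)

tx = np.cos(2 * np.pi * f0 * t)        # transmitted signal s(t)
rx = np.cos(2 * np.pi * f0 * a * t)    # received signal r(t) = s(a * t)

# Undo the scaling by resampling: since r(t) = s(a*t), we have s(t) = r(t/a).
restored = np.interp(t / a, t, rx)

print("max error before resampling:", np.max(np.abs(rx - tx)))
print("max error after  resampling:", np.max(np.abs(restored - tx)))
```

Before correction, the accumulated phase drift makes the received tone drift completely out of phase with the transmitted one; after resampling, only the small linear-interpolation error remains.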
Integration of anatomical and hemodynamical information in magnetic resonance angiography
Detail Enhancing Denoising of Digitized 3D Models from a Mobile Scanning System
The acquisition process of digitizing a large-scale environment produces an enormous amount of raw geometry data. This data is corrupted by system noise, which leads to 3D surfaces that are not smooth and details that are distorted. Any scanning system has noise associated with the scanning hardware, both digital quantization errors and measurement inaccuracies, but a mobile scanning system has additional system noise introduced by the pose estimation of the hardware during data acquisition. The combined system noise generates data that is not handled well by existing noise reduction and smoothing techniques.
This research is focused on enhancing the 3D models acquired by mobile scanning systems used to digitize large-scale environments. These digitization systems combine a variety of sensors – including laser range scanners, video cameras, and pose estimation hardware – on a mobile platform for the quick acquisition of 3D models of real world environments. The data acquired by such systems are extremely noisy, often with significant details being on the same order of magnitude as the system noise. By utilizing a unique 3D signal analysis tool, a denoising algorithm was developed that identifies regions of detail and enhances their geometry, while removing the effects of noise on the overall model.
The developed algorithm can be useful for a variety of digitized 3D models, not just those involving mobile scanning systems. The challenges faced in this study were the automatic processing needs of the enhancement algorithm, and the need to fill a gap in the area of 3D model analysis in order to reduce the effect of system noise on the 3D models. In this context, our main contributions are the automation and integration of a data enhancement method not well known to the computer vision community, and the development of a novel 3D signal decomposition and analysis tool. The new technologies featured in this document are intuitive extensions of existing methods to new dimensionality and applications. The totality of the research has been applied towards detail enhancing denoising of scanned data from a mobile range scanning system, and results from both synthetic and real models are presented.
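As a generic illustration of detail-preserving smoothing (not the decomposition tool developed in this research), the sketch below applies a 1-D bilateral filter to a noisy scan-line profile: noise is averaged away while a sharp step, standing in for surface detail, survives. All parameters are invented for the example.

```python
import numpy as np

def bilateral_1d(z, sigma_s=3.0, sigma_r=0.3, radius=8):
    """Edge-preserving smoothing: neighbours are weighted by both spatial
    distance and height difference, so sharp detail survives averaging."""
    out = np.empty_like(z)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    for i in range(len(z)):
        j = np.clip(i + offsets, 0, len(z) - 1)
        range_w = np.exp(-(z[j] - z[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * z[j]) / np.sum(w)
    return out

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 400)
clean = np.where(x < 0.5, 0.0, 1.0)           # a sharp step, standing in for detail
noisy = clean + rng.normal(0, 0.05, x.size)   # scanner-style noise
smoothed = bilateral_1d(noisy)
```

A plain Gaussian blur with the same spatial width would round off the step; the range weight is what keeps it sharp.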
Image enhancement in digital X-ray angiography
Anyone who does not look back to the beginning throughout a
course of action, does not look forward to the end. Hence it
necessarily follows that an intention which looks ahead, depends
on a recollection which looks back.
Aurelius Augustinus, De civitate Dei, VII.7 (417 A.D.)
Chapter 1
Introduction and Summary
Despite the development of imaging techniques based on alternative physical
phenomena, such as nuclear magnetic resonance, emission of single photons
(γ-radiation) by radio-pharmaceuticals and of photon pairs by electron-positron
annihilations, reflection of ultrasonic waves, and the Doppler effect, X-ray based
image acquisition is still daily practice in medicine. Perhaps this can be attributed
to the fact that, contrary to many other phenomena, X-rays lend themselves naturally
to registration by means of materials and methods widely available at the time of
their discovery, a fact that gave X-ray based medical imaging an at least 50-year
head start over possible alternatives. Immediately after the preliminary communication
on the discovery of the "new light" by Röntgen [317], late December 1895, the
possible applications of X-rays were investigated intensively. In 1896 alone, almost
1,000 articles about the new phenomenon appeared in print (Glasser [119] lists
all of them). Although most of the basics of the diagnostic as well as the therapeutic
uses of X-rays had been worked out by the end of that year [289], research on
improved acquisition and reduction of potential risks for humans continued steadily in
the century to follow. The development of improved X-ray tubes, rapid film changers,
image intensifiers, the introduction of television cameras into fluoroscopy, and
computers in digital radiography and computerized tomography formed a succession of
achievements which increased the diagnostic potential of X-ray based imaging.
One of the areas in medical imaging where X-rays have always played an
important role is angiography,† which concerns the visualization of blood vessels in the
human body. As already suggested, research on the possibility of visualization of the
human vasculature was initiated shortly after the discovery of X-rays. A photograph
of a first "angiogram", obtained by injection of a mixture of chalk, red mercury,
and petroleum into an amputated hand, followed by almost an hour of exposure to
X-rays, was published as early as January 1896, by Hascheck & Lindenthal [139].
Although studies on cadavers led to greatly improved knowledge of the anatomy of
the human vascular system, angiography in living man for the purpose of diagnosis
and intervention became feasible only after substantial progress in the development
of relatively safe contrast media and methods of administration, as well as
advancements in radiological equipment. Of special interest in the context of this thesis is
the improvement brought by photographic subtraction, a technique known since the
early 1900s and since then used successfully in e.g. astronomy, but first introduced
in X-ray angiography in 1934, by Ziedses des Plantes [425, 426]. This technique
allowed for a considerable enhancement of vessel visibility by cancellation of unwanted
background structures. In the 1960s, the time-consuming film subtraction process
was replaced by analog video subtraction techniques [156, 275] which, with the
introduction of digital computers, gave rise to the development of digital subtraction
angiography [194], a technique still considered by many the "gold standard" for
detection and quantification of vascular anomalies. Today, research on improved X-ray
based imaging techniques for angiography continues, witness the recent developments
in three-dimensional rotational angiography [88, 185, 186, 341, 373].
† A term originating from the Greek words ἀγγεῖον (aggeion), meaning "vessel" or "bucket", and
γράφειν (graphein), meaning "to write" or "to record".
The subject of this thesis is enhancement of digital X-ray angiography images. In
contrast with the previously mentioned developments, the emphasis is not on the
further improvement of image acquisition techniques, but rather on the development
and evaluation of digital image processing techniques for retrospective enhancement
of images acquired with existing techniques. In the context of this thesis, the term
"enhancement" must be regarded in a rather broad sense. It refers not only
to improvement of image quality by reduction of disturbing artifacts and noise, but
also to minimization of possible image quality degradation and loss of quantitative
information, inevitably introduced by required image processing operations. These
two aspects of image enhancement will be clarified further in a brief summary of each
of the chapters of this thesis.
The first three chapters deal with the problem of patient motion artifacts in digital
subtraction angiography (DSA). In DSA imaging, a sequence of 2D digital X-ray
projection images is acquired, at a rate of e.g. two per second, following the injection
of contrast material into one of the arteries or veins feeding the part of the vasculature
to be diagnosed. Acquisition usually starts about one or two seconds prior to arrival
of the contrast bolus in the vessels of interest, so that the first few images included
in the sequence do not show opacified vessels. In a subsequent post-processing step,
one of these "pre-bolus" images is then subtracted automatically from each of the
contrast images so as to mask out background structures such as bone and
soft-tissue shadows. However, it is clear that in the resulting digital subtraction images,
the unwanted background structures will have been removed completely only if
the patient lay perfectly still during acquisition of the original images. Since most
patients show at least some physical reaction to the passage of a contrast medium, this
proviso is generally not met. As a result, DSA images frequently show patient-motion
induced artifacts (see e.g. the bottom-left image in Fig. 1.1), which may influence the
subsequent analysis and diagnosis carried out by radiologists.
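The subtraction step just described, and the rim artifacts that appear when mask and contrast frames are misaligned, can be sketched with synthetic images; the disc, vessel, and 3-pixel shift below are invented for the illustration.

```python
import numpy as np

yy, xx = np.mgrid[0:128, 0:128]

def background(cx, cy):
    """A bright disc standing in for a bone shadow."""
    return 0.6 * (((xx - cx) ** 2 + (yy - cy) ** 2) < 30 ** 2)

vessel = 0.3 * (np.abs(xx - 90) < 2)        # opacified vessel, live frames only

mask     = background(50, 64)               # pre-bolus (mask) image
contrast = background(50, 64) + vessel      # live image, patient perfectly still
moved    = background(53, 64) + vessel      # live image, patient moved 3 pixels

dsa_good = contrast - mask                  # background cancels, vessel remains
dsa_bad  = moved - mask                     # rim artifacts survive at the disc edge
```

With perfect alignment the background cancels exactly and only the vessel remains; with the shifted live frame, crescent-shaped residues of the disc edge survive the subtraction, the analogue of the motion artifacts in Fig. 1.1.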
Since the introduction of DSA, in the early 1980s, many solutions to the problem
of patient motion artifacts have been put forward. Chapter 2 presents an overview
of the possible types of motion artifacts reported in the literature and the techniques
that have been proposed to avoid them. The main purpose of that chapter is to
review and discuss the techniques proposed over the past two decades to correct for
patient motion artifacts retrospectively, by means of digital image processing. The
chapter addresses fundamental problems, such as whether it is possible to construct
a 2D geometrical transformation that exactly describes the projective effects of an
originally 3D transformation, as well as practical problems, such as how to retrieve
the correspondence between mask and contrast images by using only the grey-level
information contained in the images, and how to align the images according to that
correspondence in a computationally efficient manner.
Figure 1.1. Example of creation and reduction of patient motion artifacts in
cerebral DSA imaging. Top left: a "pre-bolus" or mask image acquired just prior
to the arrival of the contrast medium. Top right: one of the contrast or live images
showing opacified vessels. Bottom left: DSA image obtained after subtraction of
the mask from the contrast image, followed by contrast enhancement. Due to patient
motion, the background structures in the mask and contrast image were not perfectly
aligned, as a result of which the DSA image shows not only blood vessels but
also additional undesired structures (in this example primarily in the bottom-left
part of the image). Bottom right: DSA image resulting from subtraction of the
mask and contrast image after application of the automatic registration algorithm
described in Chapter 3.
The review in Chapter 2 reveals that there exists quite some literature on the
topic of (semi-)automatic image alignment, or image registration, for the purpose of
motion artifact reduction in DSA images. However, to the best of our knowledge,
research in this area has never led to algorithms which are sufficiently fast and robust
to be acceptable for routine use in clinical practice. By drawing upon the suggestions
put forward in Chapter 2, a new approach to automatic registration of digital X-ray
angiography images is presented in Chapter 3. Apart from describing the functionality
of the components of the algorithm, special attention is paid to their computationally
optimal implementation. The results of preliminary experiments described in that
chapter indicate that the algorithm is effective, very fast, and outperforms
alternative approaches, in terms of both image quality and required computation time. It is
concluded that the algorithm is most effective in cerebral and peripheral DSA
imaging. An example of the image quality enhancement obtained after application of the
algorithm in the case of a cerebral DSA image is provided in Fig. 1.1.
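One classical way to recover a translational misalignment from grey-level information alone is phase correlation; the sketch below is a generic illustration of that idea, not the registration algorithm of Chapter 3, and the test image and shift are invented.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) such that a equals b shifted by (dy, dx)."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12              # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                    # map wrap-around to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
moved = np.roll(img, (3, -5), axis=(0, 1))      # simulate a small patient motion

print(phase_correlation_shift(moved, img))      # recovers (3, -5)
```

The normalization to unit magnitude turns the cross-power spectrum into a pure phase term, whose inverse transform is a sharp peak at the displacement; this makes the estimate robust to overall intensity differences between mask and contrast frames.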
Chapter 4 reports on a clinical evaluation of the automatic registration technique.
The evaluation involved 104 cerebral DSA images, which were corrected for patient
motion artifacts by the automatic technique, as well as by pixel shifting | a manual
correction technique currently used in clinical practice. The quality of the DSA images
resulting from the two techniques was assessed by four observers, who compared the
images both mutually and to the corresponding original images. The results of the
evaluation presented in Chapter 4 indicate that the difference in performance between
the two correction techniques is statistically significant. From the results of the mutual
comparisons it is concluded that, on average, the automatic registration technique
performs either comparably, better than, or even much better than manual pixel
shifting in 95% of all cases. In the other 5% of the cases, the remaining artifacts are
located near the borders of the image, which are generally diagnostically non-relevant.
In addition, the results show that the automatic technique implies a considerable
reduction of post-processing time compared to manual pixel shifting (on average, one
second versus 12 seconds per DSA image).
The last two chapters deal with somewhat different topics. Chapter 5 is concerned
with visualization and quantification of vascular anomalies in three-dimensional
rotational angiography (3DRA). Similar to DSA imaging, 3DRA involves the acquisition
of a sequence of 2D digital X-ray projection images, following a single injection of
contrast material. Contrary to DSA, however, this sequence is acquired during a 180°
rotation of the C-arm on which the X-ray source and detector are mounted
antipodally, with the object of interest positioned in its iso-center. The rotation is completed
in about eight seconds and the resulting image sequence typically contains 100 images,
which form the input to a filtered back-projection algorithm for 3D reconstruction. In
contrast with most other 3D medical imaging techniques, 3DRA is capable of
providing high-resolution isotropic datasets. However, due to the relatively high noise level
and the presence of other unwanted background variations caused by surrounding
tissue, the use of noise reduction techniques is inevitable in order to obtain smooth
visualizations of these datasets (see Fig. 1.2). Chapter 5 presents an inquiry into the
effects of several linear and nonlinear noise reduction techniques on the visualization
and subsequent quantification of vascular anomalies in 3DRA images. The
evaluation is focussed on frequently occurring anomalies such as a narrowing (or stenosis)
of the internal carotid artery or a circumscribed dilation (or aneurysm) of
intracranial arteries. Experiments on anthropomorphic vascular phantoms indicate that, of
the techniques considered, edge-enhancing anisotropic diffusion filtering is most
suitable, although the practical use of this technique may currently be limited due to its
memory and computation-time requirements.
Figure 1.2. Visualizations of a clinical 3DRA dataset, illustrating the qualitative
improvement obtained after noise reduction filtering. Left: volume rendering of
the original, raw image. Right: volume rendering of the image after application
of edge-enhancing anisotropic diffusion filtering (see Chapter 5 for a description of
this technique). The visualizations were obtained by using the exact same settings
for the parameters of the volume rendering algorithm.
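As an illustration of nonlinear diffusion filtering, the sketch below implements scalar Perona-Malik diffusion, a simpler relative of the tensor-based edge-enhancing anisotropic diffusion evaluated in Chapter 5; the image and all parameters are invented for the example.

```python
import numpy as np

def perona_malik(u, n_iter=30, kappa=0.2, dt=0.2):
    """Scalar nonlinear diffusion: smooth where gradients are small,
    diffuse little across strong edges."""
    u = u.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)     # edge-stopping function
    for _ in range(n_iter):
        dn = np.roll(u, 1, axis=0) - u          # differences to 4 neighbours
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(3)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                             # a vessel-like intensity edge
noisy = clean + rng.normal(0, 0.1, clean.shape)
filtered = perona_malik(noisy)
```

Because the diffusivity g collapses at strong gradients, the vessel edge keeps its contrast while the homogeneous regions are smoothed, the same qualitative behaviour sought in Fig. 1.2.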
Finally, Chapter 6 addresses the problem of interpolation of sampled data, which
occurs e.g. when applying geometrical transformations to digital medical images for
the purpose of registration or visualization. In most practical situations,
interpolation of a sampled image followed by resampling of the resulting continuous image
on a geometrically transformed grid inevitably implies loss of grey-level information,
and hence image degradation, the amount of which is dependent on image content,
but also on the employed interpolation scheme (see Fig. 1.3). It follows that the
choice of a particular interpolation scheme is important, since it influences the
results of registrations and visualizations, and the outcome of subsequent quantitative
analyses which rely on grey-level information contained in transformed images.
Although many interpolation techniques have been developed over the past decades,
thorough quantitative evaluations and comparisons of these techniques for medical
image transformation problems are still lacking. Chapter 6 presents such a
comparative evaluation. The study is limited to convolution-based interpolation techniques,
as these are most frequently used for registration and visualization of medical image
data. Because of the ubiquitousness of interpolation in medical image processing and
analysis, the study is not restricted to XRA and 3DRA images, but also includes
datasets from many other modalities. It is concluded that for all modalities, spline
interpolation constitutes the best trade-off between accuracy and computational cost,
and therefore is to be preferred over all other methods.
Figure 1.3. Illustration of the fact that the loss of information due to interpolation
and resampling operations is dependent on the employed interpolation scheme.
Left: slice of a 3DRA image after rotation over 5.0°, by using linear interpolation.
Middle: the same slice, after rotation by using cubic spline interpolation. Right:
the difference between the two rotated images. Although it is not possible with
such a comparison to come to conclusions as to which of the two methods yields
the smallest loss of grey-level information, this example clearly illustrates the point
that different interpolation methods usually yield different results.
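The dependence of the error on the interpolation kernel can be demonstrated even in 1-D. The sketch below shifts a band-limited signal by half a sample using linear interpolation and Keys' cubic-convolution kernel (a = -1/2) and compares both against the analytic ground truth; it is a generic illustration, not the evaluation protocol of Chapter 6.

```python
import numpy as np

def keys_cubic(s, a=-0.5):
    """Keys' cubic-convolution kernel, the classic 'cubic' image kernel."""
    s = np.abs(s)
    return np.where(s < 1, (a + 2) * s**3 - (a + 3) * s**2 + 1,
           np.where(s < 2, a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a, 0.0))

def resample(y, xs, kernel, support):
    """Evaluate a signal sampled on the unit grid at positions xs."""
    out = np.zeros_like(xs)
    base = np.floor(xs).astype(int)
    for k in range(-support, support + 1):
        idx = np.clip(base + k, 0, len(y) - 1)
        out += y[idx] * kernel(xs - base - k)
    return out

n = np.arange(256)
f = lambda t: np.sin(2 * np.pi * 0.05 * t)       # band-limited test signal
y = f(n)
xs = n[4:-4] + 0.5                               # shift the grid by half a sample

lin = resample(y, xs, lambda s: np.maximum(0.0, 1 - np.abs(s)), 1)
cub = resample(y, xs, keys_cubic, 2)

err_lin = np.max(np.abs(lin - f(xs)))
err_cubic = np.max(np.abs(cub - f(xs)))
print(f"max error  linear: {err_lin:.2e}   cubic: {err_cubic:.2e}")
```

The cubic kernel's error is more than an order of magnitude below the linear kernel's for this smooth signal, which is the same qualitative ranking that drives the Fig. 1.3 comparison.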
In summary, this thesis is concerned with the improvement of image quality and the
reduction of image quality degradation and loss of quantitative information. The
subsequent chapters describe techniques for reduction of patient motion artifacts in
DSA images, noise reduction techniques for improved visualization and quantification
of vascular anomalies in 3DRA images, and interpolation techniques for the purpose
of accurate geometrical transformation of medical image data. The results and
conclusions of the evaluations described in this thesis provide general guidelines for the
applicability and practical use of these techniques.
Optimization of the optical fronthaul for cloud computing-based radio access networks (CC-RANs)
Joint doctorate (MAP-Tele) in Electrical Engineering/Telecommunications
The proliferation of different mobile devices, bandwidth-intensive applications
and services contributes to the increase in broadband connections
and in the volume of traffic on mobile networks. This exponential growth
has put considerable pressure on mobile network operators (MNOs). In
particular, there is a need for networks that offer not only low complexity,
low energy consumption, and extremely low latency but also high capacity
at relatively low cost. In order to address this demand, MNOs have given significant
attention to the cloud radio access network (C-RAN) due to its beneficial
features in terms of performance optimization and cost-effectiveness.
The de facto standard for distributing wireless signal over the C-RAN fronthaul
is the common public radio interface (CPRI). However, optical links
based on CPRI interfaces require large bandwidth. These
requirements can also be met with the implementation of a free space
optical (FSO) link, which is an optical wireless system. FSO is an appealing
alternative to the radio frequency (RF) communication system that
combines the flexibility and mobility offered by the RF networks with the
high-data rates provided by the optical systems. However, the FSO links are
susceptible to atmospheric impairments which eventually hinder the system
performance. Consequently, these limitations prevent FSO from being an
efficient standalone fronthaul solution. So, precise channel characterizations
and advanced technologies are required for practical FSO link deployment
and operation. In this thesis, we study an efficient fronthaul implementation
that is based on radio-on-FSO (RoFSO) technologies. We propose closed-form
expressions for fading mitigation and for the estimation of channel
capacity so as to alleviate the system complexity. Numerical simulations
are presented for adaptive modulation schemes using advanced modulation
formats. We also consider schemes like hybrid RF/FSO and relay-assisted
transmission technologies that can help in alleviating the stringent requirements
imposed by the C-RAN backhaul/fronthaul. The proposed models not only
reduce the computational effort, but also have a number of
other merits, such as high accuracy, low memory requirements, and fast and
stable operation compared to current state-of-the-art analytical
approaches. In addition to the FSO channel characterization, we present
a proof-of-concept experiment in which we study the transmission capabilities
of a hybrid passive optical network (PON)-FSO system. This is
implemented with the real-time receiver that is emulated by a commercial
field-programmable gate array (FPGA). This helps in facilitating an
open system and hence enables interoperability, portability, and open software
standards. The hybrid schemes have the ability to support different
applications, services, and multiple operators over a shared optical fiber
infrastructure.
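In the simplest ergodic setting, the channel-capacity estimates discussed above reduce to averaging log2(1 + SNR·h) over the irradiance fading. The Monte Carlo sketch below assumes log-normal (weak-turbulence) fading with illustrative parameters; it is a generic baseline, not the closed-form models proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)

snr_db = 10.0                        # mean electrical SNR in dB (illustrative)
snr = 10 ** (snr_db / 10)
sigma_x = 0.2                        # log-amplitude std dev (weak turbulence)

# Log-normal irradiance h = exp(2X), X ~ N(-sigma_x^2, sigma_x^2),
# normalised so that E[h] = 1.
x = rng.normal(-sigma_x**2, sigma_x, 200_000)
h = np.exp(2 * x)

c_fading = np.mean(np.log2(1 + snr * h))   # ergodic capacity, bit/s/Hz
c_awgn = np.log2(1 + snr)                  # no-fading reference

print(f"ergodic capacity {c_fading:.3f} vs AWGN {c_awgn:.3f} bit/s/Hz")
```

By Jensen's inequality the fading-averaged capacity falls below the AWGN reference; closed-form approximations of this average are exactly what removes the Monte Carlo cost in practice.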
Recent Application in Biometrics
In recent years, a number of recognition and authentication systems based on biometric measurements have been proposed. Algorithms and sensors have been developed to acquire and process many different biometric traits. Moreover, biometric technology is being used in novel ways, with potential commercial and practical implications for our daily activities. The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in biometrics. The topics covered in this book reflect both aspects of this development well. They include biometric sample quality, privacy-preserving and cancellable biometrics, contactless biometrics, novel and unconventional biometrics, and the technical challenges of implementing the technology in portable devices. The book consists of 15 chapters, divided into four sections: biometric applications on mobile platforms, cancelable biometrics, biometric encryption, and other applications. The book was edited by Dr. Jucheng Yang and Dr. Norman Poh. We deeply appreciate the efforts of our guest editors, Dr. Girija Chetty, Dr. Loris Nanni, Dr. Jianjiang Feng, Dr. Dongsun Park, and Dr. Sook Yoon, as well as a number of anonymous reviewers.
The Lunar Scout Program: An international program to survey the Moon from orbit for geochemistry, mineralogy, imagery, geodesy, and gravity
The Lunar Scout Program was one of a series of attempts by NASA to develop and fly an orbiting mission to the Moon to collect geochemical, geological, and gravity data. Predecessors included the Lunar Observer, the Lunar Geochemical Orbiter, and the Lunar Polar Orbiter, missions studied under the auspices of the Office of Space Science. The Lunar Scout Program, however, was an initiative of the Office of Exploration. It was begun in late 1991 and was transferred to the Office of Space Science after the Office of Exploration was disbanded in 1993. Most of the work was done by a small group of civil servants at the Johnson Space Center; other groups responsible for mission planning included personnel from the Charles Stark Draper Laboratories, the Lawrence Livermore National Laboratory, Boeing, and Martin Marietta. The Lunar Scout Program was cancelled after failing to achieve new-start funding in FY 93 and FY 94, a result of budget downturns, the de-emphasis of the Space Exploration Initiative, and the fact that lunar science did not rate as high a priority as other planned planetary missions. The work done on the Lunar Scout Program and other lunar orbiter studies, however, represents assets that will be useful in developing new approaches to lunar orbit science.