9,582 research outputs found
Integration studies of RF solid-state generators in the electrical system of NBTF experiments and ITER HNB
SPIDER operation, started in 2018, pointed out performance-limiting issues
caused by the technology employed in RF generators, based on tetrode
free-running oscillators. One of these limits, namely the onset of frequency
instabilities, prevented operation at the full rated power of 200 kW. In
addition, tetrodes require high voltage to operate, which translates to risk of
flashovers and the necessity to perform conditioning procedures, limiting the
overall reliability. These disadvantages, combined with the positive experience
gained in the meanwhile on smaller facilities with solid state amplifiers, led
to the proposal of a complete re-design of the radiofrequency power supplies.
This paper describes the modelling activities used to define specifications and
design criteria of the solid-state amplifiers for SPIDER and MITICA, which can
be directly transposed to the ITER HNB units when their functionality is
proven. We detail the topology of the generators, consisting of class D
amplifier modules combined to achieve the required 200 kW, whose design is
mainly driven by the necessity to deliver nominal power to the ion source,
mitigate the risk of obsolescence, and improve the reliability through
modularity. Due to the non-standard application, we gave particular focus to
the integration of generators in the RF systems of SPIDER and MITICA. Numerical
analyses were performed to verify the impact of harmonic distortion on
transmission line and RF load components, to address the effect of mutual
coupling between RF circuits on the generator output modulation, and to assess
the magnitude of common mode currents in the electric system. These studies, as
well as the experience gained from SPIDER operation, helped to define dedicated
circuit design provisions and control strategies, which are currently being
implemented in the detailed design and construction phase of the new RF
amplifiers.
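Since an ideal class D stage switches its output between the supply rails, the unfiltered output approximates a square wave whose odd harmonics fall off as 1/n (Fourier amplitude 4V/(nπ)); this is the textbook origin of the harmonic distortion such integration studies must address. A minimal sketch of that spectrum, with illustrative values not taken from the paper:

```python
# Hedged sketch: harmonic amplitudes of an ideal 50%-duty class D output.
# The 400 V rail value below is illustrative, not a SPIDER/MITICA figure.
import math

def square_wave_harmonic(v_rail, n):
    """Peak amplitude of the n-th harmonic of an ideal 50% square wave."""
    return 0.0 if n % 2 == 0 else 4.0 * v_rail / (n * math.pi)

fundamental = square_wave_harmonic(400.0, 1)
third = square_wave_harmonic(400.0, 3)
# The 3rd harmonic carries 1/3 of the fundamental's amplitude, which is
# why output filtering and transmission-line ratings matter downstream.
print(round(third / fundamental, 3))
```

In practice the combined-module output is filtered before the transmission line, so the numerical analyses in the paper concern the residual harmonic content rather than this raw spectrum.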
Novel substrate integrated waveguides and components
This thesis examines the properties of novel waveguides at mm-wave frequencies as advanced alternatives to the conventional components used today in research as well as in commercial applications. The analysis begins with the folded waveguide, a space-saving substitute for the well-known Rectangular Waveguide (RWG). Folded waveguides are dielectric-filled metallic structures that preserve the original modes of a rectangular waveguide in a more compact geometry. There are two types of folded waveguides, type 1 and type 2, both of which are narrower than RWGs. Furthermore, it is shown that the bandwidth characteristics of type 1 are by far superior to those of an RWG. Due to the closed nature of folded structures, the dispersion characteristics are identical to those of the RWG. This thesis presents design equations for type 1 and type 2 guides and discusses their fabrication process as well as their ability to form multilayer stacks with even greater benefits in bandwidth and reduced dimensions. By introducing discontinuities in the transmission line of a folded guide, we create resonating cavities with a controllable response, i.e. a filter. Hence it is shown that folded guides form the basis for a new class of folded components with the benefit of small widths. Another novel type of waveguide analysed in this thesis is the Non-Radiative Perforated Dielectric Waveguide (NRPD). The structure is based on the operating principle of the conventional Nonradiative Dielectric Waveguide (NRD) but, instead of air, the slab is surrounded by perforated dielectric. Our structure uses the same theory as the NRD, with the only difference being the value of the surrounding permittivity, which is equal to the equivalent permittivity of the perforated lattice. The NRPD shows manufacturing superiority over conventional NRDs and allows the fabrication of NRPD components such as filters, based on the resonating-cavity principle.
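The dominant-mode behaviour underlying these guides can be illustrated with the textbook TE10 cutoff formula f_c = c/(2a√εr): dielectric filling lowers the cutoff by √εr, which is part of what allows a folded, dielectric-filled guide to be made smaller. The dimensions and permittivity below are illustrative, not taken from the thesis:

```python
# Hedged sketch: TE10 cutoff of a dielectric-filled rectangular waveguide.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def te10_cutoff_hz(width_m, eps_r=1.0):
    """Cutoff frequency of the TE10 mode for broad-wall width `width_m`."""
    return C / (2.0 * width_m * math.sqrt(eps_r))

# WR-28 (Ka-band) broad wall is 7.112 mm; air-filled cutoff is ~21.1 GHz.
f_air = te10_cutoff_hz(7.112e-3)
# Filling with a dielectric (eps_r = 2.2 here, illustrative) lowers the
# cutoff by sqrt(eps_r), permitting a proportionally narrower guide.
f_diel = te10_cutoff_hz(7.112e-3, eps_r=2.2)
print(round(f_air / 1e9, 1), round(f_diel / 1e9, 1))
```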
Evaluation of image quality and reconstruction parameters in recent PET-CT and PET-MR systems
In this PhD dissertation, we propose to evaluate the impact of using different PET isotopes on
the National Electrical Manufacturers Association (NEMA) performance evaluation tests of the
GE Signa integrated PET/MR. The methods were divided into three closely related categories:
NEMA performance measurements, system modelling and evaluation of the image quality of
the state-of-the-art of clinical PET scanners. NEMA performance measurements for
characterizing spatial resolution, sensitivity, image quality, the accuracy of attenuation and
scatter corrections, and noise equivalent count rate (NECR) were performed using clinically
relevant and commercially available radioisotopes. Then we modelled the GE Signa integrated
PET/MR system using a realistic GATE Monte Carlo simulation and validated it with the result of
the NEMA measurements (sensitivity and NECR). Next, the effect of the 3T MR field on the
positron range was evaluated for F-18, C-11, O-15, N-13, Ga-68 and Rb-82. Finally, to evaluate the image
quality of the state-of-the-art clinical PET scanners, a noise reduction study was performed
using a Bayesian Penalized-Likelihood reconstruction algorithm on a time-of-flight PET/CT
scanner to investigate whether and to what extent noise can be reduced. The outcome of this
thesis will allow clinicians to reduce the PET dose, which is especially relevant for young
patients. In addition, the Monte Carlo simulation platform for PET/MR developed for this thesis will
allow physicists and engineers to better understand and design integrated PET/MR systems.
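The noise equivalent count rate measured in these NEMA tests follows the standard figure of merit NECR = T²/(T + S + kR), with trues T, scatters S and randoms R, where k is 1 or 2 depending on the randoms-correction method. A minimal sketch with invented count rates:

```python
# Hedged sketch of the standard NEMA NECR formula; the rates below are
# illustrative, not measurements from the GE Signa PET/MR evaluation.
def necr(trues, scatters, randoms, k=1):
    """Noise equivalent count rate in counts/s (0 if no counts at all)."""
    total = trues + scatters + randoms
    return trues**2 / (trues + scatters + k * randoms) if total else 0.0

# 100 kcps trues, 40 kcps scatters, 20 kcps randoms (illustrative):
print(round(necr(100e3, 40e3, 20e3)))
```

The peak of this curve as a function of activity concentration is what scanner comparisons typically quote.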
Machine learning enabled millimeter wave cellular system and beyond
Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth required in coverage, capacity and traffic, providing a better quality of service and experience to users, supporting ultra-high data rates and reliability, and ensuring ultra-low latency. However, due to the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, there are challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next generation cellular network will be more complex, and a more complex network brings more complex problems. The plethora of possibilities makes planning and managing such a network system more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, efficient and effective handover for mobile users is important. The probability of a handover trigger will significantly increase in the next generation network due to the dense small-cell deployment. Since the resources in the base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for the users, the line-of-sight (LOS) channel would be the main transmission channel. However, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always available. The non-line-of-sight channel should therefore be explored and used as a backup link to serve the users. With all these problems tending to be complex and nonlinear, and the data traffic dramatically increasing, conventional methods are no longer effective or efficient.
In this case, how to solve the problems in the most efficient manner becomes important.
Therefore, some new concepts, as well as novel technologies, need to be explored. Among them, one promising solution is the utilization of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, which allows the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced, while the huge number of devices and the data they produce can be efficiently managed.
Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management. To be specific, first of all, a procedure to predict the angle of arrival (AOA) and angle of departure (AOD), both in azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network is proposed. Moreover, along with the AOA and AOD prediction, a trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD). Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and the results show that the DNN is a promising tool for predicting AOA and AOD in an NLOS scenario. Second, a novel handover scheme is designed, aiming to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) plays a role in determining handover trigger conditions. Further, an optimization problem is proposed in conjunction with MAPPO to select the target base station and determine beam selection.
It aims to evaluate and optimize the system performance of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made.
Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied on each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from these changes. Based on that, the beam management in the SCBS is optimized according to the reward or penalty received when serving different UEs. The approach improves the overall system throughput and delay performance compared with traditional beam management methods.
The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network, and they provide specific solutions for optimizing NLOS beam tracking, handover management and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2°. Further, for the handover optimization part, the numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, when compared with two typical RL algorithms, Deep Deterministic Policy Gradient (DDPG) and Deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence performance of the MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
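As a toy stand-in for the RL-driven handover idea (the thesis itself uses MAPPO combined with an optimization stage, not tabular Q-learning), the sketch below trains a Q-table to learn when a UE should hand over between two hypothetical base stations; every state, reward and cost here is invented for illustration:

```python
# Illustrative sketch only: tabular Q-learning for a two-BS handover toy.
import random

random.seed(0)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
# state: which BS serves the UE (0 or 1); action: 0 = stay, 1 = hand over
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

def reward(state, action):
    # Pretend BS 1 offers the better beam: being served by BS 0 pays less,
    # and every handover incurs a small signalling cost. Invented numbers.
    served = state if action == 0 else 1 - state
    return (1.0 if served == 1 else 0.2) - (0.1 if action == 1 else 0.0)

state = 0
for _ in range(500):
    a = random.choice((0, 1)) if random.random() < EPS \
        else max((0, 1), key=lambda x: Q[(state, x)])
    r = reward(state, a)
    nxt = state if a == 0 else 1 - state
    Q[(state, a)] += ALPHA * (r + GAMMA * max(Q[(nxt, 0)], Q[(nxt, 1)])
                              - Q[(state, a)])
    state = nxt

# After training, the agent should prefer handing over from BS 0 to BS 1
# and staying put once served by BS 1.
print(Q[(0, 1)] > Q[(0, 0)], Q[(1, 0)] > Q[(1, 1)])
```

The actual O-MAPPO scheme replaces this table with policy networks and adds an optimization problem for target-BS and beam selection, but the trigger-learning structure is analogous.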
Optimisation of Triboelectric Nanogenerator performance in vertical contact-separation mode
The triboelectric nanogenerator (TENG) is one of the most promising energy harvesters – a technology that uses repeated or reciprocating contact of suitably chosen materials to generate charge via the triboelectric effect (TE) and converts this into usable voltage and current. TENGs are attractive as they can continuously generate charge over a wide range of operating conditions and have several valuable advantages such as light weight, simple structure, low cost and high efficiency. Therefore, TENGs have been explored in a wide range of applications, including self-powered wearable electronics, powering electronics and even harvesting ocean wave/wind energy. One of the major limitations of TENGs is their low power output (usually <500 W/m2). This thesis focuses on a few specific approaches to optimising TENG output performance. It begins by presenting a solution to this challenge by optimizing a low-permittivity substrate beneath the tribo-contact layer. The open-circuit voltage is found to increase by a factor of 1.3 in moving from PET to the lower-permittivity PTFE. TENG performance is also believed to depend on contact force, but the origin of this dependence had not previously been explored. Herein, we show that this behaviour results from a contact-force-dependent real contact area Ar, as governed by surface roughness. The open-circuit voltage Voc, short-circuit current Isc and Ar for a TENG were found to increase with contact force/pressure. Critically, Voc and Isc saturate at the same contact pressure as Ar, suggesting that the electrical output follows the same evolution as Ar. Assuming that tribo-charges can only transfer across the interface at areas of real contact, it follows that an increasing Ar with contact pressure should produce a corresponding increase in the electrical output.
These results underline the importance of accounting for real contact area in TENG design, as well as the distinction between real and nominal contact area in the definition of tribo-charge density. High-performance ferroelectric-assisted TENGs (Fe-TENGs) are then developed using electrospun fibrous surfaces based on P(VDF-TrFE) with dispersed BaTiO3 (BTO) nanofillers in either cubic (CBTO) or tetragonal (TBTO) form. TENGs with three types of tribo-negative surface were investigated and the output increased progressively. Critically, P(VDF-TrFE)/TBTO produced a higher output than P(VDF-TrFE)/CBTO even though their permittivities are nearly identical. Thus, it is shown that BTO fillers boost output not just by increasing permittivity, but also by enhancing the crystallinity and amount of the β-phase (as TBTO produced a more crystalline β-phase present in greater amounts).
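Contact-separation TENGs like these are commonly analysed with the standard lumped-capacitor model (after Niu et al.), in which V_oc = σx/ε0 and Q_sc = Sσx/(d0 + x), with effective dielectric thickness d0 = Σ d_i/ε_ri. A minimal sketch with illustrative numbers, not measurements from the thesis:

```python
# Hedged sketch of the textbook parallel-plate contact-separation TENG model.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def effective_thickness(layers):
    """layers: list of (thickness_m, relative_permittivity) tuples."""
    return sum(d / eps_r for d, eps_r in layers)

def v_oc(sigma, x):
    """Open-circuit voltage for tribo-charge density sigma (C/m^2), gap x (m)."""
    return sigma * x / EPS0

def q_sc(sigma, x, area, d0):
    """Short-circuit transferred charge (C)."""
    return area * sigma * x / (d0 + x)

sigma = 8e-6  # C/m^2, an illustrative tribo-charge density
x = 1e-3      # 1 mm separation
d0 = effective_thickness([(100e-6, 2.1)])  # 100 um PTFE film
print(round(v_oc(sigma, x)))         # open-circuit voltage, volts
print(q_sc(sigma, x, 25e-4, d0))     # transferred charge for a 5x5 cm pad
```

In this idealized model the tribo-charge density σ is taken over the nominal area; the thesis's point is that the physically meaningful density is set by the real contact area Ar.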
Full stack development toward a trapped ion logical qubit
Quantum error correction is a key step toward the construction of a large-scale quantum computer, by preventing small infidelities in quantum gates from accumulating over the course of an algorithm. Detecting and correcting errors is achieved by using multiple physical qubits to form a smaller number of robust logical
qubits. The physical implementation of a logical qubit requires multiple qubits, on which high fidelity gates
can be performed.
The project aims to realize a logical qubit based on ions confined on a microfabricated surface trap. Each
physical qubit will be a microwave dressed state qubit based on 171Yb+ ions. Gates are intended to be realized through RF and microwave radiation in combination with magnetic field gradients. The project vertically integrates software down to hardware compilation layers in order to deliver, in the near future, a fully functional small device demonstrator.
This thesis presents novel results on multiple layers of a full stack quantum computer model. On the hardware level, a robust quantum gate is studied and ion displacement over the X-junction geometry is demonstrated.
The experimental organization is optimized through automation and compressed waveform data transmission. A new quantum assembly language purely dedicated to trapped ion quantum computers is introduced. The demonstrator is aimed at testing implementation of quantum error correction codes while preparing for larger
scale iterations.
Brain signal recognition using deep learning
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Brain Computer Interface (BCI) has the potential to offer a new generation of applications independent of
muscular activity and controlled by the human brain. Brain imaging technologies are used to transfer the
cognitive tasks into control commands for a BCI system. The electroencephalography (EEG) technology
serves as the best available non-invasive solution for extracting signals from the brain. On the other hand,
speech is the primary means of communication, but for patients suffering from locked-in syndrome, there
is no easy way to communicate. Therefore, an ideal communication system for locked-in patients is a
thought-to-speech BCI system.
This research aims to investigate methods for the recognition of imagined speech from EEG signals
using deep learning techniques. In order to design an optimal imagined speech recognition BCI, a variety
of issues had to be addressed. These include 1) proposing a new feature extraction and classification
framework for recognition of imagined speech from EEG signals, 2) grammatical class recognition of
imagined words from EEG signals, 3) discriminating different cognitive tasks associated with speech in
the brain such as overt speech, covert speech, and visual imagery. In this work, machine learning and deep
learning methods were used to analyze EEG signals.
For recognition of imagined speech from EEG signals, a new EEG database was collected while the
participants mentally spoke (imagined speech) the presented words. Along with imagined speech, EEG
data was recorded for visual imagery (imagining a scene or an image) and overt speech (verbal speech).
Spectro-temporal and spatio-temporal domain features were investigated for the classification of imagined
words from EEG signals. Further, a deep learning framework using the convolutional network
and attention mechanism was implemented for learning features in the spatial, temporal, and spectral
domains. The method achieved a recognition rate of 76.6% for three binary word pairs. These experiments
show that deep learning algorithms are ideal for imagined speech recognition from EEG signals
due to their ability to interpret features from non-linear and non-stationary signals. Grammatical classes
of imagined words from EEG signals were also recognized using a multi-channel convolution network
framework. This method was extended to a multi-level recognition system for multi-class classification
of imagined words, which achieved an accuracy of 52.9% for 10 words, a marked improvement
over previous work.
In order to investigate the difference between imagined speech with verbal speech and visual imagery
from EEG signals, we used multivariate pattern analysis (MVPA). MVPA provided the time segments
when the neural oscillation for the different cognitive tasks was linearly separable. Further, frequencies
that result in most discrimination between the different cognitive tasks were also explored. A framework
was proposed to discriminate two cognitive tasks based on the spatio-temporal patterns in EEG signals.
The proposed method used the K-means clustering algorithm to find the best electrode combination and
convolutional-attention network for feature extraction and classification. The proposed method achieved
high recognition rates of 82.9% and 77.7%.
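The electrode-selection step can be caricatured with a tiny one-dimensional K-means: split per-electrode discriminability scores into a "keep" and a "drop" cluster. The electrode names and scores below are invented for illustration, and the real method clusters spatio-temporal EEG features rather than scalar scores:

```python
# Illustrative sketch only: 1-D K-means (k=2) for electrode selection.
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D K-means with deterministic min/max initialization."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return c

# Hypothetical per-electrode discriminability scores (fraction correct):
scores = {"C3": 0.81, "C4": 0.78, "Cz": 0.74, "F3": 0.52,
          "O1": 0.49, "T7": 0.55, "Pz": 0.76, "F8": 0.51}
centers = kmeans_1d(list(scores.values()))
# Keep the electrodes closer to the higher-scoring cluster center.
keep = sorted(e for e, s in scores.items()
              if abs(s - max(centers)) < abs(s - min(centers)))
print(keep)
```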
The results of this research suggest that a communication-based BCI system can be designed using
deep learning methods. Further, this work adds to the existing body of work in the field of
communication-based BCI systems.
A Cryogenically-Cooled High-Sensitivity Nuclear Quadrupole Resonance Spectrometer
The paper describes a radio frequency (RF) spectrometer for 14N nuclear
quadrupole resonance (NQR) spectroscopy that uses a detector coil cooled to 77
K to maximize measurement sensitivity. The design uses a minimally-intrusive
network of active duplexers and mechanical contact switches to realize a
digitally reconfigurable series/parallel coil tuning network that allows
transmit- and receive-mode performance to be independently optimized. The
design is battery-powered and includes a mixed-signal embedded system to
monitor and control secondary processes, thus enabling autonomous operation.
Tests on an acetaminophen sample show that cooling both the detector and sample
increases the signal-to-noise ratio (SNR) per scan by a factor of approximately
88 (in power units), in good agreement with theoretical predictions.
Comment: Submitted to Review of Scientific Instruments
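A back-of-envelope account of why cooling helps, under textbook assumptions: the NQR signal amplitude follows Curie-law polarization (proportional to 1/T_sample), while the coil's Johnson noise power is proportional to T_coil times its resistance. The sketch below deliberately omits the resistance change, so it recovers only part of the measured factor:

```python
# Hedged sketch: SNR power gain from cooling sample and coil from 295 K
# to 77 K, modelling only Curie-law signal and Johnson-noise temperature.
T_WARM, T_COLD = 295.0, 77.0

signal_power_gain = (T_WARM / T_COLD) ** 2  # Curie law, in power units
noise_power_drop = T_WARM / T_COLD          # Johnson noise power scales with T

snr_power_gain = signal_power_gain * noise_power_drop
print(round(snr_power_gain))  # ~56; the drop in coil resistance at 77 K
                              # (not modelled here) accounts for the rest
                              # of the reported factor of ~88
```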
Optical coherence tomography methods using 2-D detector arrays
Optical coherence tomography (OCT) is a non-invasive, non-contact optical technique that allows cross-sectional imaging of biological tissues with high spatial resolution, high sensitivity and high dynamic range. Standard OCT uses a focused beam to illuminate a point on the target and detects the signal using a single photodetector. To acquire transverse information, transversal scanning of the illumination point is required. Alternatively, multiple OCT channels can be operated in parallel simultaneously, with the parallel OCT signals recorded by a two-dimensional (2D) detector array. This approach is known as parallel-detection OCT. In this thesis, methods, experiments and results using three parallel OCT techniques, including full-field (time-domain) OCT (FF-OCT), full-field swept-source OCT (FF-SS-OCT) and line-field Fourier-domain OCT (LF-FD-OCT), are presented. Several 2D digital cameras of different formats have been used and evaluated in the experiments of the different methods. With the LF-FD-OCT method, photography equipment such as flashtubes and commercial DSLR cameras has been employed and tested for OCT imaging. The techniques used in FF-OCT and FF-SS-OCT are employed in a novel wavefront sensing technique, which combines OCT methods with a Shack-Hartmann wavefront sensor (SH-WFS). This combined technique is demonstrated to be capable of measuring depth-resolved wavefront aberrations, which has the potential to extend the applications of SH-WFS in wavefront-guided biomedical imaging techniques.
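For context, the axial resolution of an OCT system with a Gaussian source spectrum follows the textbook formula Δz = (2 ln 2/π)·λ0²/Δλ; the source parameters below are typical values, not those of the thesis setups:

```python
# Hedged sketch: OCT axial resolution for a Gaussian-spectrum source.
import math

def axial_resolution_m(center_wl_m, bandwidth_m):
    """Axial resolution dz = (2*ln2/pi) * lambda0^2 / d_lambda, in metres."""
    return (2.0 * math.log(2) / math.pi) * center_wl_m**2 / bandwidth_m

# 840 nm centre wavelength, 50 nm FWHM bandwidth (illustrative values):
dz = axial_resolution_m(840e-9, 50e-9)
print(round(dz * 1e6, 1))  # axial resolution in micrometres
```

Because Δz scales inversely with bandwidth, broadband sources are what give OCT its characteristic micrometre-scale depth sectioning, independent of the focusing optics.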