    Scaling up integrated photonic reservoirs towards low-power high-bandwidth computing

    Variable optical elements for fast focus control

    In this Review, we survey recent developments in the emerging field of high-speed variable-z-focus optical elements, which are driving important innovations in advanced imaging and materials processing applications. Three-dimensional biomedical imaging, high-throughput industrial inspection, advanced spectroscopies, and other optical characterization and materials modification methods have made great strides forward in recent years due to precise and rapid axial control of light. Three key state-of-the-art optical technologies that enable fast z-focus modulation are reviewed, along with a discussion of the implications of the new developments in variable optical elements and their impact on technologically relevant applications.

    Quantum dots based superluminescent diodes and photonic crystal surface emitting lasers

    This thesis reports the design, fabrication, and electrical and optical characterisation of GaAs-based quantum dot (QD) photonic devices, specifically focusing on superluminescent diodes (SLDs) and photonic crystal surface-emitting lasers (PCSELs). The integration of QD active regions in these devices is advantageous due to characteristics such as temperature insensitivity, feedback insensitivity, and the ability to utilise both the ground state (GS) and excited state (ES) of the dots. In an initial study concerning the fabrication of QD-SLDs, the influence of ridge waveguide etch depth on the electrical and optical properties of the devices is investigated. It is shown that the output power and modal gain of shallow-etched ridge waveguides are higher than those of deep-etched waveguides. Subsequently, the thermal performance of the devices is analysed. As the temperature is increased above 170 °C, the spectral bandwidth increases dramatically due to thermally excited carrier transitions into the excited states of the dots. Following this, an investigation of a high-dot-density hybrid quantum well/quantum dot (QW/QD) active structure for broadband, high-modal-gain SLDs is presented. The influence of the number of QD layers on the modal gain of hybrid QW/QD structures is analysed. It is shown that a higher number of dot layers provides a higher modal gain; however, there is a lack of emission from the QW because a large number of carriers is required to saturate the QDs. Additionally, a comparison is made between “unchirped” and “chirped” QD variants of the hybrid QW/QD structure in terms of modal gain and spectral bandwidth. It is shown that chirping the QDs can improve the “flatness” of the spectral bandwidth. Lastly, the use of self-assembled InAs QDs as the active material in epitaxially regrown GaAs-based PCSELs is explored for the first time. Initially, it is shown that both GS and ES lasing can be achieved for QD-PCSELs by changing the grating period of the photonic crystal (PC). Careful design of these grating periods allows lasing from neighbouring devices at the GS (~1230 nm) and ES (~1140 nm), 90 nm apart in wavelength. Following this, the effect of device area, PC etch depth, and PC atom shape and orientation (circular or triangular) on lasing performance is presented. It is shown that lower threshold current densities and higher slope efficiencies are achieved with increasing device size. Devices with a deeper PC etch have higher output power, attributed to a more suitable grating height and a smaller distance to the active region. The triangular atom shape has a slightly higher slope efficiency than the circular atom shape, which is attributed to the broken in-plane symmetry increasing out-of-plane emission.
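
    The dependence of lasing wavelength on grating period noted above follows the Bragg condition for a second-order, surface-emitting grating, λ ≈ n_eff·Λ. The sketch below is a back-of-the-envelope estimate of the periods needed for GS and ES lasing, assuming an illustrative effective index; it is not the thesis design.

```python
# Back-of-the-envelope sketch (not the thesis design): for a second-order
# grating PCSEL, surface emission occurs near the Bragg condition
# lambda = n_eff * Lambda, so the photonic-crystal period selects the
# lasing wavelength. The effective index below is an assumed value.
n_eff = 3.3  # assumed effective index of the GaAs-based waveguide

for label, wavelength_nm in (("GS, ~1230 nm", 1230.0), ("ES, ~1140 nm", 1140.0)):
    period_nm = wavelength_nm / n_eff
    print(f"{label}: grating period ~ {period_nm:.0f} nm")
# Shifting the period by a few tens of nanometres therefore switches the
# device between ground-state and excited-state lasing, as described above.
```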

    Electronic and photonic integrated circuits for millimeter wave-over-fiber

    High-speed operation of fiber-optic link impaired by wind gusts

    Fast signal transmission is critical for long-haul communication systems, which represent key advancements shaping information and communication technologies. Fiber-optic transmission suffers from many degradation effects, and of particular concern are stochastic fiber impairments represented by polarization mode dispersion (PMD). PMD is critical because it limits link operation at data rates higher than 10 Gbps. In this work, we report experimental measurements and theoretical analysis of PMD-based propagation effects in optical fibers influenced by wind gusts. The study was performed on a fiber-optic link that runs through 111 km of optical power ground wire cables. The measured maximum differential group delay (DGD) was up to 10 ps for a wind speed of 20 m/s. Under this wind condition, the optical link maintained reliable operation only at the established 10 Gbps, while considerable link degradation was seen for data rates between 40 and 100 Gbps.
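
    To put the measured DGD in context, the short worked comparison below relates the reported 10 ps maximum DGD to the bit period at each line rate. The guideline that DGD should stay below roughly 10% of a bit period is a common rule of thumb assumed here for illustration, not a figure from this work.

```python
# Rough illustration (not from the paper): compare the measured 10 ps DGD
# against the bit period at several line rates. A common rule of thumb keeps
# DGD below roughly 10% of the bit period; the exact budget depends on the
# modulation format and receiver, so treat the threshold as an assumption.
dgd_ps = 10.0  # measured maximum DGD at 20 m/s wind

for rate_gbps in (10, 40, 100):
    bit_period_ps = 1000.0 / rate_gbps   # 1 / (rate in Gbit/s), in ps
    fraction = dgd_ps / bit_period_ps
    print(f"{rate_gbps:>3} Gbps: bit period {bit_period_ps:5.1f} ps, "
          f"DGD is {fraction:.0%} of a bit period")
# 10 Gbps -> 10%, 40 Gbps -> 40%, 100 Gbps -> 100%: consistent with reliable
# operation at 10 Gbps and severe degradation between 40 and 100 Gbps.
```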

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with the advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next-generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as addressing the massive growth required in coverage, capacity, and traffic, providing a better quality of service and experience to users, supporting ultra-high data rates and reliability, and ensuring ultra-low latency. However, due to the characteristics of mmWave, such as short transmission distance, high sensitivity to blockage, and large propagation path loss, there are challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next-generation cellular network will be more complex, and a more complex network brings more complex problems. The plethora of possibilities makes planning and managing such a network system more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, providing efficient and effective handover for mobile users is important. The probability of a handover trigger will significantly increase in the next-generation network due to dense small-cell deployment. Since the resources in the base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel; however, due to the characteristics of mmWave and the complexity of the environment, a LOS channel is not always available. Non-line-of-sight (NLOS) channels should therefore be explored and used as backup links to serve the users. With the problems becoming complex and nonlinear and the data traffic dramatically increasing, conventional methods are no longer effective or efficient, so solving these problems in the most efficient manner becomes important and new concepts and novel technologies need to be explored. Among them, one promising solution is the utilization of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data, which allows the system to use adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, the complexity and workload can be reduced while the huge number of devices and the data they generate can be efficiently managed. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of non-line-of-sight (NLOS) beam tracking, handover management, and beam management. To be specific, first, a procedure to predict the angle of arrival (AOA) and angle of departure (AOD), both in azimuth and elevation, in non-line-of-sight mmWave communications based on a deep neural network is proposed. Moreover, along with the AOA and AOD prediction, a trajectory prediction is employed based on the dynamic window approach (DWA). The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) are trained to predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD). Furthermore, under the assumption that the UE mobility and precise location are unknown, the UE trajectory is predicted and input into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and it is concluded that the DNN is a promising tool to predict AOA and AOD in an NLOS scenario. Second, a novel handover scheme is designed, aiming to optimize the overall system throughput and the total system delay while guaranteeing the quality of service (QoS) of each user equipment (UE). Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm and optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) plays a role in determining handover trigger conditions. Further, an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine beam selection. It aims to evaluate and optimize the system performance in terms of total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment, and the MADDPG algorithm learns from the experience of these changes. Based on that, the beam management at the SCBS is optimized according to the reward or penalty received when serving different UEs. This approach improves the overall system throughput and delay performance compared with traditional beam management methods. The works presented in this thesis demonstrate the potential of ML for addressing problems in the mmWave cellular network and provide specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam tracking part, simulation results show that the prediction errors of the AOA and AOD can be maintained within an acceptable range of ±2. Further, for the handover optimization part, the numerical results show the system throughput and delay are improved by 10% and 25%, respectively, when compared with two typical RL algorithms, deep deterministic policy gradient (DDPG) and deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results reveal the convergence performance of the MADDPG and its superiority in improving the system throughput compared with other typical RL algorithms and the traditional beam management method.
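
    As an illustration of the kind of regression network described for NLOS angle prediction, the following is a minimal sketch assuming a small fully connected architecture; the feature set, layer sizes, and training data are placeholders, not the networks or ray-tracing dataset used in the thesis.

```python
# Minimal sketch (not the thesis code): a small fully connected network that
# maps channel/trajectory features to four angles (AAOA, AAOD, EAOA, EAOD).
# The feature dimension and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    def __init__(self, num_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),  # AAOA, AAOD, EAOA, EAOD
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = AngleRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy training step on random data standing in for ray-tracing samples.
features = torch.randn(32, 8)   # e.g. predicted UE position, previous angles
angles = torch.randn(32, 4)     # ground-truth AAOA/AAOD/EAOA/EAOD
optimizer.zero_grad()
loss = loss_fn(model(features), angles)
loss.backward()
optimizer.step()
print(f"training loss after one step: {loss.item():.3f}")
```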

    Underwater optical wireless communications in turbulent conditions: from simulation to experimentation

    Underwater optical wireless communication (UOWC) is a technology that aims to apply high-speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high-speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communication (UAC) technologies. However, there are difficulties involved in developing a reliable UOWC link, chiefly the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through to simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel. This thesis first presents a method of modelling optical underwater turbulence through simulation that allows it to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence-induced scattering is shown to increase both spatial and temporal spreading at the receiver plane. It is also demonstrated, using the technique presented, that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of such channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used log-normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance in terms of maximum achievable data rate and bit error rate is also investigated. Following that, experimental studies are presented comparing the relative impact of turbulence-induced scattering on coherent and non-coherent light propagating through water, and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel. These results indicate that a light beam from a non-coherent source has a greater resilience to temperature-inhomogeneity-induced turbulence effects in an underwater channel. These results will help researchers simulate realistic channel conditions when modelling a light-emitting-diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link. Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence-induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit- and power-loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
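
    The scintillation index and log-normal fading model referred to above can be sketched in a few lines; the log-amplitude standard deviation and sample count below are arbitrary assumptions for illustration, not values from the experiments.

```python
# Minimal sketch (assumed parameters, not the thesis code): estimate the
# scintillation index from received-intensity samples drawn from the
# log-normal fading model commonly used for weak turbulence.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for measured received power: log-normal fluctuations around a mean.
sigma_ln = 0.3  # assumed standard deviation of the log-intensity
intensity = rng.lognormal(mean=0.0, sigma=sigma_ln, size=100_000)

# Scintillation index: sigma_I^2 = <I^2> / <I>^2 - 1 = Var(I) / <I>^2
scint_index = intensity.var() / intensity.mean() ** 2
print(f"estimated scintillation index: {scint_index:.3f}")

# For log-normal intensity with log-variance sigma_ln^2, the theoretical value
# is exp(sigma_ln^2) - 1, which the estimate above should approach.
print(f"theoretical value: {np.exp(sigma_ln ** 2) - 1:.3f}")
```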

    Cost-effective non-destructive testing of biomedical components fabricated using additive manufacturing

    Biocompatible titanium alloys can be used to fabricate patient-specific medical components using additive manufacturing (AM). These novel components have the potential to improve clinical outcomes in various medical scenarios. However, AM introduces stability and repeatability concerns, which are potential roadblocks to its widespread use in the medical sector. Micro-CT imaging for non-destructive testing (NDT) is an effective solution for post-manufacturing quality control of these components. Unfortunately, current micro-CT NDT scanners require expensive infrastructure and hardware, which translates into prohibitively expensive routine NDT. Furthermore, the limited dynamic range of these scanners can cause severe image artifacts that may compromise the diagnostic value of the non-destructive test. Finally, the cone-beam geometry of these scanners makes them susceptible to the adverse effects of scattered radiation, which is another source of artifacts in micro-CT imaging. In this work, we describe the design, fabrication, and implementation of a dedicated, cost-effective micro-CT scanner for NDT of AM-fabricated biomedical components. Our scanner reduces the limitations of costly image-based NDT by optimizing the scanner's geometry and the image acquisition hardware (i.e., X-ray source and detector). Additionally, we describe two novel techniques to reduce image artifacts caused by photon starvation and scattered radiation in cone-beam micro-CT imaging. Our cost-effective scanner was designed to match the image requirements of medium-size titanium-alloy medical components. We optimized the image acquisition hardware by using an 80 kVp low-cost portable X-ray unit and developing a low-cost lens-coupled X-ray detector. Image artifacts caused by photon starvation were reduced by implementing dual-exposure high-dynamic-range radiography. For scatter mitigation, we describe the design, manufacturing, and testing of a large-area, highly focused, two-dimensional anti-scatter grid. Our results demonstrate that cost-effective NDT using low-cost equipment is feasible for medium-sized, titanium-alloy, AM-fabricated medical components. Our proposed high-dynamic-range strategy improved the penetration capabilities of an 80 kVp micro-CT imaging system by 37% for a total X-ray path length of 19.8 mm. Finally, our novel anti-scatter grid provided a 65% improvement in CT number accuracy and a 48% improvement in low-contrast visualization. Our proposed cost-effective scanner and artifact reduction strategies have the potential to improve patient care by accelerating the widespread use of patient-specific, biocompatible, AM-manufactured medical components.
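
    The dual-exposure high-dynamic-range idea can be sketched as follows: keep the long exposure where the detector is not saturated and substitute the rescaled short exposure where it is. This is a simplified illustration with assumed normalisation and threshold values, not the scanner's actual processing chain.

```python
# Illustrative sketch (assumptions, not the scanner's pipeline): merge a short
# and a long exposure of the same projection into one high-dynamic-range image.
import numpy as np

def merge_dual_exposure(short_exp, long_exp, exposure_ratio, saturation=0.95):
    """short_exp/long_exp: projections normalised to [0, 1];
    exposure_ratio: long/short exposure time (assumed known)."""
    short_exp = np.asarray(short_exp, dtype=float)
    long_exp = np.asarray(long_exp, dtype=float)
    hdr = np.where(long_exp < saturation,       # keep well-exposed pixels
                   long_exp,
                   short_exp * exposure_ratio)  # rescale short exposure elsewhere
    return hdr

# Example: a detector row where the long exposure clips behind thin material.
short_row = np.array([0.10, 0.20, 0.30, 0.48])
long_row = np.array([0.40, 0.80, 0.96, 0.99])  # last two pixels saturated
print(merge_dual_exposure(short_row, long_row, exposure_ratio=4.0))
```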

    Interference mitigation in LiFi networks

    Due to the increasing demand for wireless data, the radio frequency (RF) spectrum has become a very limited resource. Alternative approaches are under investigation to support the future growth in data traffic and next-generation high-speed wireless communication systems. Techniques such as massive multiple-input multiple-output (MIMO), millimeter wave (mmWave) communications and light-fidelity (LiFi) are being explored. Among these technologies, LiFi is a novel bi-directional, high-speed and fully networked wireless communication technology. However, inter-cell interference (ICI) can significantly restrict the system performance of LiFi attocell networks. This thesis focuses on interference mitigation in LiFi attocell networks. The angle diversity receiver (ADR) is one solution for addressing ICI and enabling frequency reuse in LiFi attocell networks. With its high concentration gain and narrow field of view (FOV), the ADR is very beneficial for interference mitigation. However, the optimum structure of the ADR has not previously been investigated, which motivates us to propose optimum ADR structures in order to fully exploit the performance gain. The impact of random device orientation and diffuse link signal propagation is taken into consideration. A performance comparison between select best combining (SBC) and maximum ratio combining (MRC) is carried out under different noise levels. In addition, the double source (DS) system, where each LiFi access point (AP) consists of two sources transmitting the same information signals but with opposite polarity, is proven to outperform the single source (SS) system under certain conditions. Then, to overcome the issues of ICI, random device orientation and link blockage, hybrid LiFi/WiFi networks (HLWNs) are considered. In this thesis, dynamic load balancing (LB) considering handover in HLWNs is studied. The orientation-based random waypoint (ORWP) mobility model is considered to provide a more realistic framework for evaluating the performance of HLWNs. Based on the low-pass filtering effect of the LiFi channel, we first propose an orthogonal frequency division multiple access (OFDMA)-based resource allocation (RA) method for LiFi systems. An enhanced evolutionary game theory (EGT)-based LB scheme with handover in HLWNs is also proposed. Finally, owing to their high directivity and narrow beams, a vertical-cavity surface-emitting laser (VCSEL) array transmission system is proposed to mitigate ICI. In order to support mobile users, two beam activation methods are proposed. Beam activation based on the corner-cube retroreflector (CCR) achieves low power consumption and almost-zero delay, allowing real-time beam activation for high-speed users. The mechanism based on the omnidirectional transmitter (ODTx) is suitable for low-speed users and is very robust to random orientation.
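
    To make the SBC versus MRC comparison concrete, the minimal post-detection combining sketch below contrasts the two schemes; the branch gains, noise level, and bipolar signalling are illustrative assumptions rather than the receiver model analysed in the thesis.

```python
# Minimal sketch (assumed parameters, not the thesis implementation):
# post-detection combining for an angle diversity receiver. Each photodiode
# branch i has gain h_i and equal noise variance; SBC keeps the strongest
# branch, while MRC weights each branch by its gain so the output SNR is the
# sum of the branch SNRs.
import numpy as np

rng = np.random.default_rng(1)
h = np.array([0.9, 0.4, 0.1, 0.05])  # assumed channel gains of the PD branches
noise_var = 1e-2

signal = rng.choice([-1.0, 1.0], size=1000)               # bipolar symbols
noise = rng.normal(scale=np.sqrt(noise_var), size=(len(h), signal.size))
received = h[:, None] * signal + noise                    # one row per PD

# Select best combining: keep the branch with the highest gain (highest SNR).
sbc_out = received[np.argmax(h)]

# Maximum ratio combining: weight each branch by its gain before summing.
mrc_out = (h[:, None] * received).sum(axis=0) / (h ** 2).sum()

def empirical_snr(output, reference):
    # Fit output = a * reference + noise, then estimate SNR as a^2 / var(noise).
    a = np.dot(output, reference) / np.dot(reference, reference)
    return a ** 2 / (output - a * reference).var()

print(f"SBC SNR ~ {empirical_snr(sbc_out, signal):.0f} "
      f"(theory {h.max() ** 2 / noise_var:.0f})")
print(f"MRC SNR ~ {empirical_snr(mrc_out, signal):.0f} "
      f"(theory {(h ** 2).sum() / noise_var:.0f})")
```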