
    Characterisation and State Estimation of Magnetic Soft Continuum Robots

    Minimally invasive surgery has become more popular as it leads to less bleeding, scarring, and pain, and to shorter recovery times. However, it has also brought counter-intuitive devices and steep learning curves for surgeons. Magnetically actuated Soft Continuum Robots (SCRs) have the potential to replace these devices, providing high dexterity, the ability to conform to complex environments, and safe human interaction without adding to the clinician's cognitive burden. Despite considerable progress over the past decade, several challenges still hinder their full realisation. This thesis aims to improve magnetically actuated SCRs by addressing some of these challenges, including material characterisation and modelling, and sensing feedback and localisation. Material characterisation for SCRs is essential for understanding their behaviour and for designing effective modelling and simulation strategies. In this work, the properties of materials commonly employed in magnetically actuated SCRs, such as the elastic modulus, hyper-elastic model parameters, and magnetic moment, were determined, and the effect of these parameters on modelling and simulating such devices was investigated. Due to the nature of magnetic actuation, localisation is of utmost importance for accurate control and delivery of functionality. Two localisation strategies for magnetically actuated SCRs were therefore developed: one capable of estimating the full 6-degree-of-freedom (DOF) pose without any prior pose information, and another capable of accurately tracking the full 6-DOF pose in real time with positional errors below 4 mm. These contribute towards autonomous navigation and closed-loop control of magnetically actuated SCRs.
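
    As an illustration of the localisation problem, the sketch below estimates a magnet's position (three of the six DOFs, with the magnetic moment assumed known) from point-dipole field measurements at a few sensors, using a brute-force residual search over a candidate grid. The sensor layout, moment, and workspace are invented for this toy example; the thesis's actual estimators are not reproduced.

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # magnetic constant / (4*pi), in T*m/A

def dipole_field(sensor_pos, magnet_pos, moment):
    """Field of a point magnetic dipole evaluated at a sensor location."""
    r = sensor_pos - magnet_pos
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_OVER_4PI * (3.0 * rhat * (moment @ rhat) - moment) / d**3

def localise(sensors, fields, moment, candidates):
    """Pick the candidate position whose predicted fields best match the
    measurements (exhaustive search, i.e. no prior pose information)."""
    residuals = [
        sum(np.sum((dipole_field(s, c, moment) - f) ** 2)
            for s, f in zip(sensors, fields))
        for c in candidates
    ]
    return candidates[int(np.argmin(residuals))]

# synthetic setup: a sensor plane below the workspace, known moment
sensors = np.array([[0.00, 0.00, -0.02], [0.10, 0.00, -0.02],
                    [0.00, 0.10, -0.02], [0.05, 0.05, -0.02]])
moment = np.array([0.0, 0.0, 0.05])       # A*m^2 (assumed known)
true_pos = np.array([0.04, 0.02, 0.06])   # metres
fields = [dipole_field(s, true_pos, moment) for s in sensors]

# 1 cm candidate grid over an 8 cm cube of workspace
axis = np.linspace(0.0, 0.08, 9)
candidates = np.array(np.meshgrid(axis, axis, axis)).reshape(3, -1).T
est = localise(sensors, fields, moment, candidates)
```

    With noise-free synthetic fields the search recovers the grid point at the true position; a real estimator would refine this (and the remaining orientation DOFs) with a local optimiser or filter.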

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for use in 2D scanning arrays with lateral alignment. The 2D array environment requires full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide, with radiating slots etched in the upper broad wall so that the structure radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs while keeping the RF field laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
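
    The scanning mechanism can be illustrated with the textbook leaky-wave relation sin(theta) = beta/k0: biasing the LC changes the effective permittivity, which changes the guided phase constant beta and hence steers the main beam at a fixed frequency. The frequency, guide width, and permittivity values below are illustrative only, not those of the commercial LC medium used in the paper.

```python
import math

C0 = 299_792_458.0  # speed of light, m/s

def beam_angle_deg(freq_hz, eps_eff, width_m):
    """Main-beam angle (from broadside) of a leaky-wave antenna,
    using sin(theta) = beta/k0 with a TE10-like guided mode."""
    k0 = 2.0 * math.pi * freq_hz / C0
    kc = math.pi / width_m                      # cutoff wavenumber
    beta = math.sqrt(eps_eff * k0**2 - kc**2)   # guided phase constant
    return math.degrees(math.asin(beta / k0))

# sweeping the LC effective permittivity between two (hypothetical)
# bias states steers the beam at a fixed 28 GHz operating frequency
low = beam_angle_deg(28e9, 1.5, 5e-3)
high = beam_angle_deg(28e9, 2.0, 5e-3)
```

    Raising the effective permittivity raises beta and pushes the beam away from broadside, which is the fixed-frequency scanning effect the antenna exploits.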

    Laser absorption spectroscopic tomography with a customised spatial resolution for combustion diagnosis

    Combustion is a widely used energy-conversion technology, but post-combustion gas emissions have adverse effects on the climate. To address the urgent need for carbon neutrality, efforts are being made to develop cleaner fuels and improve combustion efficiency. Accurate in situ measurements of temperature and species concentration are crucial for analysing and diagnosing the combustion process. In industrial applications, probe-based methods are commonly used to measure temperature and species concentration in the combustion zone, favoured for their simplicity. However, probe-based techniques are limited in spatial resolution, as they provide only point-wise measurements, and their operating principle often restricts their temporal resolution, limiting their ability to capture combustion dynamics. To overcome these limitations, researchers are actively developing rapid, multi-dimensional in situ techniques for temperature and species concentration monitoring. Laser Absorption Spectroscopy (LAS) has gained significant attention in combustion diagnostics for its non-intrusive nature and fast response. LAS techniques use an emitter-receiver configuration to measure the line-of-sight light intensity absorbed by species in the gaseous medium. By collecting multiple line-of-sight measurements from different angles, LAS enables tomographic measurement of the combustion process. However, implementations of LAS tomography face challenges due to the physical dimensions of the emitter and receiver and the limited optical access to industrial combustors. These limitations lead to incomplete measurements, a key cause of ill-posedness and of artefacts in the reconstructed images, which make the diagnostic results inaccurate and unreliable. 
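
    The tomographic inverse problem can be illustrated with a toy algebraic reconstruction: each beam contributes one linear equation (its path-integrated absorbance), and Kaczmarz-style sweeps fit an image to the measurements. With only four beams over four pixels the system is rank-deficient, which is exactly the ill-posedness referred to above. This is a generic textbook solver, not the reconstruction method of the thesis.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, relax=1.0):
    """Algebraic reconstruction: cyclically project the image estimate
    onto each beam's linear equation (a Kaczmarz / ART-style solver)."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# toy 2x2 pixel field probed by four beams; each row of A holds one
# beam's path length through each pixel
A = np.array([[1.0, 1.0, 0.0, 0.0],   # beam across the top row
              [0.0, 0.0, 1.0, 1.0],   # beam across the bottom row
              [1.0, 0.0, 1.0, 0.0],   # beam down the left column
              [0.0, 1.0, 0.0, 1.0]])  # beam down the right column
truth = np.array([0.3, 0.1, 0.2, 0.4])   # true absorbance per pixel
b = A @ truth                            # path-integrated measurements
recon = kaczmarz(A, b)
```

    The reconstruction reproduces every measurement exactly yet differs from the true field: four beams cannot resolve the chequerboard null-space component, which is the ambiguity that denser sampling and prior knowledge aim to remove.
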
Increasing the physical sampling density is one of the most straightforward ways to alleviate the ill-posedness caused by inadequate line-of-sight measurements. Sensor improvements such as optimising the laser beam arrangement and reducing the spacing between neighbouring beams have been demonstrated in previous research. In this work, a novel miniature, modular sensor design is first introduced. It reduces the spacing between adjacent laser beams, allowing a more precise and detailed reconstruction of temperature and species concentration distributions, while the modular design allows customisation and adaptation to various measurement requirements. This flexibility in deployment reduces the cost of the LAS technique. The application of small beam spacing to characterising the non-uniformity of the combustion process is also demonstrated in this thesis: a multi-channel LAS sensor is developed and applied to exhaust measurements of a commercial auxiliary power unit. The results show that small beam spacing enables a detailed understanding of the exhaust plume in the mixing zone between the exhaust gas and the surrounding air, and this spatial information can be used to improve the accuracy of temperature and species concentration measurements. In addition, prior knowledge, such as the smoothness and sparsity of the measurement target and the beam arrangement of the LAS tomographic sensor, is used to supply extra physical information to the ill-posed inverse problem. To incorporate the beam-arrangement information into the reconstruction process, a new meshing scheme is proposed that dynamically allocates smaller meshes in beam-dense regions and coarser meshes in beam-sparse regions. 
This adaptive meshing scheme provides finer resolution in the combustion zone, where the beams are closely spaced, while maintaining the integrity of the physical model by using a less resolved reconstruction in bypass flows and regions where the beams are further apart. As a result, the proposed scheme improves reconstruction accuracy in the combustion zone. Overall, this PhD project designed and developed LAS tomographic sensors and methods that enable accurate and fast measurement of gas temperature and species concentration in combustion processes with a customised spatial resolution. The main contributions of this thesis are the design and prototyping of a miniature, modular optical sensor for flexible LAS tomography; the development of a multi-channel LAS sensor for simultaneously monitoring exhaust gas temperature and water vapour concentration in gas turbine engines; and the development of a size-adaptive hybrid meshing scheme that improves the reconstruction of target flow fields.
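
    In the spirit of the size-adaptive meshing scheme, the 1D sketch below subdivides only those coarse cells crossed by many beams; the thesis's actual 2D hybrid scheme and its parameters are not reproduced, and the thresholds here are invented.

```python
import numpy as np

def adaptive_mesh(beam_positions, x_min, x_max,
                  coarse_cells=4, refine_factor=4, threshold=2):
    """Split the domain into coarse cells, then subdivide any cell
    crossed by more than `threshold` beams into `refine_factor` parts."""
    edges = np.linspace(x_min, x_max, coarse_cells + 1)
    mesh = []
    for a, b in zip(edges[:-1], edges[1:]):
        n_beams = np.sum((beam_positions >= a) & (beam_positions < b))
        n_sub = refine_factor if n_beams > threshold else 1
        mesh.extend(np.linspace(a, b, n_sub + 1)[:-1])  # keep left edges
    mesh.append(x_max)
    return np.array(mesh)

# beams clustered near x = 0.1-0.2 (the "combustion zone"), one stray beam
beams = np.array([0.10, 0.12, 0.14, 0.16, 0.18, 0.80])
mesh = adaptive_mesh(beams, 0.0, 1.0)
```

    The beam-dense first quarter of the domain ends up with cells four times smaller than the beam-sparse remainder, so resolution follows the physical sampling density.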

    Towards trustworthy computing on untrustworthy hardware

    Historically, hardware was considered inherently secure and trusted owing to its obscurity and the isolated nature of its design and manufacturing. In the last two decades, however, hardware trust and security have emerged as pressing issues. Modern-day hardware faces threats manifested mainly in undesired modifications by untrusted parties in its supply chain, unauthorized and pirated selling, injected faults, and system- and microarchitectural-level attacks. If realized, these threats can push hardware into abnormal and unexpected behaviour, causing real-life damage and significantly undermining our trust in the electronic and computing systems we use in our daily lives and in safety-critical applications. A large number of detective and preventive countermeasures have been proposed in the literature. Our knowledge of the potential consequences of real-life threats to hardware trust is nonetheless limited, given the small number of real-life reports and the plethora of ways in which hardware trust can be undermined. With this in mind, run-time monitoring of hardware combined with active mitigation of attacks, referred to here as trustworthy computing on untrustworthy hardware, is proposed as the last line of defence: it allows us to face live hardware mistrust rather than turning a blind eye to it or being helpless once it occurs. This thesis proposes three frameworks towards trustworthy computing on untrustworthy hardware. The frameworks are adaptable to different applications, independent of the design of the monitored elements, based on autonomous security elements, and computationally lightweight. The first framework is concerned with explicit violations and breaches of trust at run-time, with an untrustworthy on-chip communication interconnect presented as a potential offender. 
This framework is based on the guiding principles of component guarding, data tagging, and event verification. The second framework targets hardware elements with inherently variable and unpredictable operational latency and proposes a machine-learning-based characterization of these latencies to infer undesired latency extensions or denial-of-service attacks; it is implemented on a DDR3 DRAM after demonstrating the memory's vulnerability to obscured latency-extension attacks. The third framework studies the possible deployment of untrustworthy hardware elements in the analog front end and the consequent integrity issues that may arise at the analog-digital boundary of systems-on-chip. It uses machine-learning methods and the unique temporal and arithmetic features of signals at this boundary to monitor their integrity and assess their trust level.
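
    A minimal stand-in for the second framework's idea: characterise normal operational latency on benign traces, then flag run-time samples that fall far outside the learned profile as possible latency-extension or denial-of-service attacks. A Gaussian z-score replaces the thesis's machine-learning models here, and the latency numbers are synthetic.

```python
import statistics

def fit_profile(benign_latencies):
    """Characterise normal latency with a simple Gaussian profile
    (a stand-in for the learned model in the framework)."""
    mu = statistics.mean(benign_latencies)
    sigma = statistics.stdev(benign_latencies)
    return mu, sigma

def is_suspicious(latency, profile, z_threshold=4.0):
    """Flag a latency sample that exceeds the profile by many sigmas,
    i.e. a candidate latency-extension attack."""
    mu, sigma = profile
    return (latency - mu) / sigma > z_threshold

benign = [51, 49, 50, 52, 48, 50, 51, 49, 50, 50]  # ns, synthetic trace
profile = fit_profile(benign)
```

    Only one-sided deviations are flagged, since an attacker extending DRAM latency makes operations slower, not faster; a learned model would additionally capture workload-dependent latency structure that a single Gaussian cannot.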

    InP membrane photonics for large-scale integration


    Intelligent Beam Steering for Wireless Communication Using Programmable Metasurfaces

    Reconfigurable Intelligent Surfaces (RIS) are well established as a promising solution to the blockage problem in millimeter-wave (mm-wave) and terahertz (THz) communications, which are envisioned to serve demanding networking applications such as 6G and vehicular networks. HyperSurfaces (HSF) are a revolutionary enabling technology for RIS, complementing Software Defined Metasurfaces (SDM) with an embedded network of controllers to enhance intelligence and autonomous operation in wireless networks. In this work, we consider feedback-based autonomous reconfiguration of the HSF controller states to establish a reliable communication channel between a transmitter and a receiver via programmable reflection on the HSF when Line-of-Sight (LoS) between them is absent. The problem is to regulate the angle of reflection on the metasurface so that the power at the receiver is maximized. Extremum Seeking Control (ESC) is employed, with the generated control signals mapped into appropriate metasurface coding signals that are communicated to the controllers via the embedded controller network (CN). This information dissemination incurs delays which can compromise the stability of the feedback system and are therefore accounted for in the performance evaluation. Extensive simulation results demonstrate the effectiveness of the proposed method in maximizing the power at the receiver within a reasonable time, even when the receiver is mobile. The spatiotemporal nature of the traffic for different sampling periods is also characterized.
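
    A minimal sketch of the extremum seeking loop: a sinusoidal dither is added to the commanded reflection angle, the measured receiver power is demodulated against the dither to estimate the local gradient, and the estimate is integrated. The power map, gains, and dither parameters below are invented for illustration, and the controller-network delays analysed in the paper are ignored.

```python
import math

def received_power(theta, theta_opt=0.6):
    """Toy unimodal power-vs-angle map, unknown to the controller."""
    return math.exp(-8.0 * (theta - theta_opt) ** 2)

def extremum_seeking(steps=20000, dt=0.002, a=0.1, omega=50.0, k=1.0):
    """Perturbation-based ESC: dither the angle, demodulate the measured
    power to estimate the local gradient, and climb it."""
    theta_hat = 0.0
    for n in range(steps):
        t = n * dt
        dither = a * math.sin(omega * t)
        y = received_power(theta_hat + dither)
        # multiplying y by the dither isolates the gradient component;
        # integrating it performs gradient ascent on the power map
        theta_hat += k * y * math.sin(omega * t) * dt
    return theta_hat

theta_final = extremum_seeking()
```

    The controller needs no model of the power map, only power measurements, which is why ESC suits a receiver whose optimal reflection angle is unknown or moving; delays in delivering the coding signals would effectively add phase lag to this loop.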

    Adaptive Intelligent Systems for Extreme Environments

    As embedded processors become more powerful, a growing number of embedded systems equipped with artificial intelligence (AI) algorithms are being used in radiation environments to perform routine tasks and reduce radiation risk for human workers. On the one hand, commercial off-the-shelf devices and components are increasingly popular for such tasks because of their low price. On the other hand, harsh environments present new challenges for improving the radiation tolerance, multi-task AI capability, and power efficiency of these embedded systems. Three pieces of research work are presented in this thesis: 1) a fast simulation method for analysing single event effects (SEE) in integrated circuits, 2) a self-refresh scheme to detect and correct bit-flips in random access memory (RAM), and 3) a hardware AI system with dynamic hardware accelerators and AI models for increased flexibility and efficiency. The variance of physical parameters in practical implementations, such as the nature of the particle, the linear energy transfer, and the circuit characteristics, can have a large impact on simulation accuracy, significantly increasing the complexity and cost of transistor-level SEE simulation and making it impractical for large-scale circuits. In the first piece of work, a new SEE simulation scheme is therefore proposed, offering a fast and cost-efficient way to evaluate and compare the performance of large-scale circuits subject to radiation effects. The advantages of transistor-level and hardware description language (HDL) simulations are combined to produce accurate SEE digital error models for rapid error analysis in large-scale circuits. Under the proposed scheme, time-consuming back-end steps are skipped, and SEE analysis for large-scale circuits can be completed in just a few hours. 
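
    The kind of digital-level error model the first piece of work builds can be illustrated with a toy single-event-upset injection campaign: flip one bit of a signal value and check whether a mitigation structure masks it. The triple-modular-redundancy voter below is a generic example, not a structure from the thesis.

```python
import random

def seu(word, width, rng):
    """Single-event-upset model at the behavioural (HDL) level:
    flip one randomly chosen bit of a signal value."""
    return word ^ (1 << rng.randrange(width))

def tmr_vote(a, b, c):
    """Bitwise triple-modular-redundancy majority vote."""
    return (a & b) | (a & c) | (b & c)

# toy campaign: upset one of three redundant 16-bit copies per trial
# and count how often the voter fails to mask the error
rng = random.Random(0)
errors = 0
for _ in range(1000):
    value = rng.randrange(1 << 16)
    copies = [value, value, value]
    i = rng.randrange(3)
    copies[i] = seu(copies[i], 16, rng)
    if tmr_vote(*copies) != value:
        errors += 1
```

    Because only one copy is corrupted per trial, the voter always recovers the original value; injecting upsets at this abstraction level is what lets a campaign over a large circuit run in hours rather than requiring transistor-level simulation.
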
In high-radiation environments, bit-flips in RAMs not only occur but can also accumulate, and typical error mitigation methods cannot handle high error rates at low hardware cost. In the second piece of work, an adaptive scheme combining error-correcting codes and refresh techniques is proposed to correct errors and mitigate error accumulation in extreme radiation environments: the data in RAM are continuously refreshed so that errors cannot accumulate. Because the proposed design shares the same ports as the user module without changing the timing sequence, it can easily be applied to systems whose hardware modules are designed with fixed read and write latency. Implementing intelligent systems with constrained hardware resources is a challenge. In the third piece of work, an adaptive hardware resource management system for multiple AI tasks in harsh environments was designed. Inspired by the “refreshing” concept of the second work, we exploit a key feature of FPGAs, partial reconfiguration, to improve the reliability and efficiency of the AI system. More importantly, this feature provides the capability to manage hardware resources for deep learning acceleration. In the proposed design, the on-chip hardware resources are dynamically managed to improve the flexibility, performance, and power efficiency of deep learning inference systems. The deep learning units provided by Xilinx are used to perform multiple AI tasks simultaneously, and the experiments show significant improvements in power efficiency across a wide range of scenarios with different workloads. To further improve the performance of the system, the concept of reconfiguration was extended, resulting in an adaptive deep learning software framework that provides a significant level of adaptability for various deep learning algorithms on an FPGA-based edge computing platform. 
To meet the specific accuracy and latency requirements derived from the running applications and operating environments, the platform can dynamically update its hardware and software (e.g., processing pipelines) to achieve better cost, power, and processing efficiency than a static system.
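
    The refresh-and-correct idea of the second piece of work can be sketched with a textbook Hamming(7,4) single-error-correcting code: as long as a scrub pass runs before a second upset lands in the same word, errors never accumulate into uncorrectable doubles. The thesis's actual code, ports, and timing are not reproduced here.

```python
def encode(nibble):
    """Hamming(7,4) encode: bit positions 1..7 are p1 p2 d0 p3 d1 d2 d3."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def scrub(bits):
    """One scrub pass: recompute the syndrome and flip the indicated
    bit, if any.  Run periodically, single upsets never accumulate."""
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # error position, 0 = clean
    if syndrome:
        bits[syndrome - 1] ^= 1
    return bits

def decode(bits):
    """Extract the data nibble from a (scrubbed) codeword."""
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

    In the thesis's scheme the scrub logic additionally shares the RAM ports with the user module without perturbing its fixed read/write timing; this sketch shows only the coding side.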

    1-D broadside-radiating leaky-wave antenna based on a numerically synthesized impedance surface

    A newly developed deterministic numerical technique for the automated design of metasurface antennas is applied here for the first time to the design of a 1-D printed Leaky-Wave Antenna (LWA) for broadside radiation. The surface impedance synthesis process requires no a priori knowledge of the impedance pattern, starting only from a mask constraint on the desired far field and practical bounds on the unit-cell impedance values. The designed reactance surface for broadside radiation exhibits an unconventional patterning, which highlights the merit of an automated design process for a problem well known to be challenging for analytical methods. The antenna is physically implemented as an array of metal strips with varying gap widths, and simulation results show very good agreement with the predicted performance.