Full-field characterisation of epicardial deformation using three-dimensional digital image correlation (3D-DIC)
Imaging of the heart provides valuable insight into its functionality and the progression of diseases affecting the cardiac muscle. Currently, ultrasound 2D speckle tracking
echocardiography (US-2D-STE) is the most frequently used clinical technique to detect and monitor the progression of cardiovascular disease through changes in strain of the cardiac muscle. However, this imaging modality has several limitations that reduce the accuracy and reproducibility of the measurement. As a result, the complex deformation behaviour of the heart, including contraction and twisting, cannot be accurately captured by this technique.
This thesis describes the development of an optical method based on 3D digital image correlation (3D-DIC) to enable full-field deformation analysis in the heart. The hypothesis of this project is that 3D measurement of local strain in experimental in vitro and ex vivo models of the heart will provide a detailed characterisation of the behaviour of the heart and provide reference measurements for comparison with clinical imaging modalities.
The experimental method requires a robust stereo optical system ensuring high-quality and synchronised imaging during heart deformation. The developed methodology was validated through multiple experimental and numerical tests in a zero-strain configuration, which provided an estimate of the error in the reconstruction of strain on the cardiac surface (approximately 1%).
Applications in experimental in vitro and ex vivo models of the heart are described. Moreover, a comparison of the performance of 3D-DIC and US-2D-STE under the same cardiac conditions is investigated, demonstrating the superiority of 3D-DIC for dynamic, high-resolution strain measurements (spatial resolution of approximately 1.5 mm).
However, as an optical technique, 3D-DIC is limited to surface measurements on the epicardium and requires an effective speckle pattern to be applied to the heart surface, which may pose biocompatibility problems and significant challenges for in vivo application.
This experimental work has led to the development of a robust tool for localised and detailed measurement of strain at high temporal and spatial resolution, with the latter improved by one order of magnitude with respect to existing optical techniques.
Interpretation of the full-field results reveals the non-uniform and inhomogeneous strain distribution on the epicardial surface and identifies changes in strain within ex vivo models of cardiac disease.
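As an illustration of the surface-strain quantity that DIC pipelines typically report, the sketch below (an illustrative assumption, not the thesis implementation) computes the Green-Lagrange strain tensor from a per-subset deformation gradient:

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = 0.5 * (F^T F - I)
    from a deformation gradient F (e.g. estimated per DIC subset)."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

# Example: 5% uniaxial stretch along x, as a 2D surface deformation gradient
F = np.array([[1.05, 0.0],
              [0.0,  1.0]])
E = green_lagrange_strain(F)
# E[0, 0] = 0.5 * (1.05**2 - 1) = 0.05125
```

The finite-strain measure is the natural choice here because cardiac deformation is large; the small-strain approximation would understate the stretch.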
User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection
An augmented reality (AR) system needs to track the user's view to perform accurate augmentation registration. The present research proposes a conceptual markerless, natural-feature-based AR framework, the process for which is divided into two stages: an offline database training session for application developers, and an online AR tracking and display session for end users. In the offline session, two types of 3D reconstruction application, RGBD-SLAM and structure from motion (SfM), are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented in the thesis, and application developers can choose which method to apply according to their development needs. A general development user interface is provided to the developer for interaction, including a simple GUI tool for augmentation configuration. The proposal also applies a Bag of Words strategy to enable rapid loop-closure detection in the online session, efficiently querying the application user's view against the trained database to locate the user pose. The rendering and display of the augmentation is currently implemented within an OpenGL window, which is one result of the research that warrants future detailed investigation and development.
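The Bag of Words loop-closure query described above can be sketched as follows; the tiny vocabulary, descriptors, and cosine score are illustrative stand-ins for the trained database, not the thesis implementation:

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Quantise local feature descriptors against a visual vocabulary
    (cluster centres) and return a normalised word histogram."""
    # squared distance from every descriptor to every vocabulary word
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)          # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def similarity(h1, h2):
    """Cosine similarity between two Bag of Words histograms."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Toy vocabulary of three 2-D "visual words"; two views of the same place
vocab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
view_a = np.array([[0.1, 0.0], [0.9, 0.1]])
view_b = np.array([[0.05, 0.02], [0.95, 0.0]])
score = similarity(bow_histogram(view_a, vocab), bow_histogram(view_b, vocab))
```

In a real system the vocabulary has thousands of words learned from training descriptors, and an inverted index replaces the brute-force comparison, but the query principle is the same.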
Multimodal Navigation for Accurate Space Rendezvous Missions
© Cranfield University 2021. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright owner.

Relative navigation is paramount in space missions that involve rendezvousing
between two spacecraft. It demands accurate and continuous estimation of the six
degree-of-freedom relative pose, as this stage involves close-proximity, fast-reaction
operations that can last up to five orbits. This has been routinely achieved thanks to
active sensors such as lidar, but their large size, cost, power and limited operational
range remain a stumbling block for en masse on-board integration. With the onset
of faster processing units, lighter and cheaper passive optical sensors are emerging as
the suitable alternative for autonomous rendezvous in combination with computer
vision algorithms. Current vision-based solutions, however, are limited by adverse
illumination conditions such as solar glare, shadowing, and eclipse. These effects are
exacerbated when the target does not hold cooperative markers to accommodate the
estimation process and is incapable of controlling its rotational state.
This thesis explores novel model-based methods that exploit sequences of monocular images acquired by an on-board camera to accurately carry out spacecraft
relative pose estimation for non-cooperative close-range rendezvous with a known
artificial target. The proposed solutions tackle the current challenges of imaging in
the visible spectrum and investigate the contribution of the long wavelength infrared
(or “thermal”) band towards a combined multimodal approach.
As part of the research, a visible-thermal synthetic dataset of a rendezvous
approach with the defunct satellite Envisat is generated from the ground up using a
realistic orbital camera simulator. From the rendered trajectories, the performance
of several state-of-the-art feature detectors and descriptors is first evaluated for
both modalities in scenarios tailored to short- and wide-baseline image transforms. Multiple combinations, including the pairing of algorithms with their
non-native counterparts, are tested. Computational runtimes are assessed on an
embedded hardware board.
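A minimal example of the kind of matching underlying such a benchmark, assuming binary descriptors compared by Hamming distance with a Lowe-style nearest/second-nearest ratio test (common choices, not necessarily the exact algorithms evaluated in the thesis):

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Keep a match only if the best candidate beats the second-best
    by the given ratio, rejecting ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc1):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc2))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# One query descriptor against two candidates: an exact twin and a distractor
desc1 = np.array([[0b00001111]], dtype=np.uint8)
desc2 = np.array([[0b00001111], [0b11110000]], dtype=np.uint8)
matches = match_ratio_test(desc1, desc2)
```

Benchmarks of this kind typically sweep the ratio threshold and report precision/recall per detector-descriptor pairing, which is how non-native combinations can be compared fairly.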
From the insight gained, a method to estimate the pose on the visible band is
derived from minimising geometric constraints between online local point and edge
contour features matched to keyframes generated offline from a 3D model of the
target. The combination of both feature types is demonstrated to achieve a pose
solution for a tumbling target using a sparse set of training images, bypassing the
need for hardware-accelerated real-time renderings of the model.
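The geometric minimisation can be illustrated with a deliberately reduced toy version: Gauss-Newton on the pinhole reprojection residual of known model points, estimating translation only (the rotation, edge contours, and keyframe matching of the actual method are omitted; all names here are hypothetical):

```python
import numpy as np

def project(pts, t):
    """Pinhole projection (focal length 1) of model points translated by t."""
    p = pts + t
    return p[:, :2] / p[:, 2:3]

def gauss_newton_translation(pts, obs, t0, iters=10):
    """Gauss-Newton on the reprojection residual with a numerical Jacobian;
    translation-only sketch of a model-to-image pose minimisation."""
    t = t0.astype(float)
    for _ in range(iters):
        r = (project(pts, t) - obs).ravel()
        J = np.zeros((r.size, 3))
        eps = 1e-6
        for k in range(3):                 # finite-difference Jacobian column
            dt = np.zeros(3); dt[k] = eps
            J[:, k] = ((project(pts, t + dt) - obs).ravel() - r) / eps
        t -= np.linalg.solve(J.T @ J, J.T @ r)  # normal-equations step
    return t

# Synthetic target: known model points observed from a true camera offset
model = np.array([[0., 0., 5.], [1., 0., 6.], [0., 1., 7.], [1., 1., 8.]])
t_true = np.array([0.2, -0.1, 0.5])
obs = project(model, t_true)
t_est = gauss_newton_translation(model, obs, np.zeros(3))
```

The full six-degree-of-freedom problem replaces the translation update with an increment on the pose manifold and stacks point and edge residuals into the same least-squares system.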
The proposed algorithm is then augmented with an extended Kalman filter
which processes each feature-induced minimisation output as an individual pseudo-measurement, fusing them to estimate the relative pose and velocity states at
each time-step. Both the minimisation and filtering are established using Lie group
formalisms, allowing for the covariance of the solution computed by the former to be automatically incorporated as measurement noise in the latter, providing
an automatic weighting of each feature type directly related to the quality of the
matches. The predicted states are then used to search for new feature matches in the
subsequent time-step. Furthermore, a method to derive a coarse viewpoint estimate
to initialise the nominal algorithm is developed based on probabilistic modelling of
the target’s shape. The robustness of the complete approach is demonstrated for
several synthetic and laboratory test cases involving two types of target undergoing
extreme illumination conditions.
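The fusion scheme described above, where each feature type contributes a pseudo-measurement weighted by its own covariance, can be sketched with identity measurement models on a plain vector state (a simplification: the thesis formulates both the minimisation and the filter on Lie groups):

```python
import numpy as np

def fuse_updates(x, P, measurements):
    """Sequential Kalman updates: each minimisation output is a direct
    pseudo-measurement of the state with its own covariance (H = I here)."""
    for z, R in measurements:
        S = P + R                          # innovation covariance
        K = P @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (z - x)                # state update
        P = (np.eye(len(x)) - K) @ P       # covariance update
    return x, P

# Point-feature and edge-feature estimates with different confidence levels
x0, P0 = np.zeros(2), np.eye(2) * 1e3           # diffuse prior
z_points = (np.array([1.0, 2.0]), np.eye(2) * 0.1)  # well-matched points
z_edges  = (np.array([1.2, 1.8]), np.eye(2) * 1.0)  # noisier edge contours
x, P = fuse_updates(x0, P0, [z_points, z_edges])
```

Because the gain scales inversely with each measurement's covariance, the fused state is pulled most strongly towards whichever feature type matched best at that time-step, which is exactly the automatic weighting the text describes.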
Lastly, an innovative deep learning-based framework is developed by processing
the features extracted by a convolutional front-end with long short-term memory cells,
thus proposing the first deep recurrent convolutional neural network for spacecraft
pose estimation. The framework is used to compare the performance achieved by
visible-only and multimodal input sequences, where the addition of the thermal band
is shown to greatly improve the performance during sunlit sequences. Potential
limitations of this modality are also identified, such as when the target’s thermal
signature is comparable to Earth's during eclipse.
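The recurrent head over convolutional features can be sketched as a single numpy LSTM cell consuming one pooled feature vector per frame (random weights and dimensions are placeholders; the actual framework learns these end-to-end and regresses the final hidden state to a pose):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step: input, forget, cell, and output gates computed
    from the concatenated [frame features, previous hidden state]."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c + i * np.tanh(g)             # updated cell memory
    h = o * np.tanh(c)                     # updated hidden state
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                           # feature and hidden sizes (toy)
W = rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1
b = np.zeros(4 * d_h)
h, c = np.zeros(d_h), np.zeros(d_h)
for t in range(5):                         # a short image sequence
    x = rng.standard_normal(d_in)          # stand-in for pooled CNN features
    h, c = lstm_step(x, h, c, W, b)
```

The memory carried in `c` is what lets the network exploit temporal consistency across the rendezvous sequence rather than estimating each frame's pose independently.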