
    Novel computational technique for determining depth using the Bees Algorithm and blind image deconvolution

    In the past decade the Scanning Electron Microscope (SEM) has taken on a significant role in the micro-nano imaging field. A number of researchers have been developing computational techniques for determining depth from SEM images. Depth from Automatic Focusing (DFAF) is one of the most popular depth computation techniques used for SEM. However, images captured with SEM may be distorted and suffer from misalignment due to internal and external factors such as interaction between the electron beam and the sample surface, lens aberrations, environmental noise and artefacts on the sample. Distortion and misalignment cause computational errors in the depth determination process, so image correction is required to reduce those errors. In this study the proposed image correction procedure is based on Phase Correlation and Log-Polar Transformation (PCLPT), which has been extensively used as a preprocessing stage for many image processing operations. The computation process of PCLPT covers pixel-level interpolation but cannot deal with sub-pixel interpolation errors; hence, an image filtering stage is necessary to reduce the error. This enhanced PCLPT was also utilised as a pre-processing step for DFAF, which is the first contribution of this research. Although DFAF is a simple technique, it was found that the computation involved becomes more complex with image correction. Thus, a simpler and more robust depth computation technique for SEM is needed. This study proposes an optimised Blind Image Deconvolution (BID) technique using the Bees Algorithm for determining depth. The Bees Algorithm (BA) is a swarm-based optimisation technique which mimics the foraging behaviour of honey bees. The algorithm combines exploitative neighbourhood search with explorative global search to enable effective location of the globally optimal solution to a problem.
The BA has been applied to several optimisation problems including mechanical design, job shop scheduling and robot path planning. Due to its promise as an effective global optimisation tool, the BA has been chosen for this work. The second contribution of the research consists of two improvements implemented to enhance the BA. The first focuses on an adaptive approach to neighbourhood size changes. The second consists of two main steps. The first step is to define a measurement technique to determine the direction along which promising solutions can be found. This is based on the steepness angle, mimicking the direction along which a scout bee performs its figure-of-eight waggle dance during the recruitment of forager bees. The second step is to develop a hybrid algorithm combining the BA and a Hill Climbing Algorithm (HCA) based on a threshold value of the steepness angle. The final contribution of this study is a novel technique based on the BA for optimising the blurriness parameter of BID for determining depth. The techniques proposed in this study have enabled depth information in SEM images to be determined with 68.23% average accuracy.
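The exploitative/explorative structure of the basic Bees Algorithm described above can be sketched as follows; the parameter values, toy objective and shrinking-neighbourhood schedule are illustrative only and are not those used in the thesis:

```python
import random

def bees_algorithm(f, bounds, n=20, m=5, e=2, nep=7, nsp=3,
                   ngh=0.5, iterations=100, seed=0):
    """Minimise f over a box using the basic Bees Algorithm.

    n: scout bees, m: selected sites, e: elite sites,
    nep/nsp: foragers recruited per elite/non-elite site,
    ngh: initial neighbourhood radius.
    """
    rng = random.Random(seed)
    rand_point = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    scouts = sorted((rand_point() for _ in range(n)), key=f)
    for _ in range(iterations):
        new_sites = []
        for i, site in enumerate(scouts[:m]):
            recruits = nep if i < e else nsp           # more foragers at elite sites
            patch = [[min(max(x + rng.uniform(-ngh, ngh), lo), hi)
                      for x, (lo, hi) in zip(site, bounds)]
                     for _ in range(recruits)]
            new_sites.append(min(patch + [site], key=f))  # keep the best bee per site
        # remaining scouts explore globally at random
        scouts = sorted(new_sites + [rand_point() for _ in range(n - m)], key=f)
        ngh *= 0.95                                    # shrink the neighbourhood
    return scouts[0]

sphere = lambda p: sum(x * x for x in p)
best = bees_algorithm(sphere, [(-5, 5), (-5, 5)])
```

In a depth-from-BID setting, `f` would instead score a candidate blurriness parameter by the quality of the resulting deconvolution.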

    Combining strong and weak lensing estimates in the COSMOS field

    We present a combined cosmic shear analysis of the modeling of line-of-sight distortions on strongly lensed extended arcs and galaxy shape measurements in the COSMOS field. We develop a framework to predict the covariance of strong lensing and galaxy shape measurements of cosmic shear on the basis of the small-scale matter power spectrum. The weak lensing measurement is performed using data from the COSMOS survey calibrated with a cloning scheme using the Ultra Fast Image Generator UFig (Berge 2013). The strong lensing analysis is performed by forward modeling the lensing arcs with a main lensing deflector and external shear components from the same Hubble Space Telescope imaging data set. With a sample of three strong lensing shear measurements we present a 2-sigma detection of the cross-correlation signal between the two complementary measurements of cosmic shear along the identical line of sight. With the large samples of lenses that will be available with the next generation of ground- and space-based observatories, the covariance of the signals of the two probes allows for systematic checks, cross-calibration of either of the two measurements, and measurement of the small-scale shear power spectrum.

    Extracting field hockey player coordinates using a single wide-angle camera

    In elite level sport, coaches are always trying to develop tactics to better their opposition. In a team sport such as field hockey, a coach must consider both the strengths and weaknesses of their own team and those of the opposition to develop an effective tactic. Previous work has shown that spatiotemporal coordinates of the players are a good indicator of team performance, yet the manual extraction of player coordinates is a laborious process that is impractical for a performance analyst. Subsequently, the key motivation of this work was to use a single camera to capture two-dimensional position information for all players on a field hockey pitch. The study developed an algorithm to automatically extract the coordinates of the players on a field hockey pitch using a single wide-angle camera. This is a non-trivial problem that requires: 1. Segmentation and classification of a set of players that are relatively small compared to the image size, and 2. Transformation from image coordinates to world coordinates, considering the effects of the lens distortion due to the wide-angle lens. Subsequently the algorithm addressed these two points in two sub-algorithms: Player Feature Extraction and Reconstruct World Points. Player Feature Extraction used background subtraction to segment player blob candidates in the frame. 61% of blobs in the dataset were correctly segmented, while a further 15% were over-segmented. Subsequently a Convolutional Neural Network was trained to classify the contents of blobs. The classification accuracy on the test set was 85.9%. This was used to eliminate non-player blobs and reform over-segmented blobs. The Reconstruct World Points sub-algorithm transformed the image coordinates into world coordinates. To do so, the intrinsic and extrinsic parameters were estimated using planar camera calibration.
Traditionally the extrinsic parameters are optimised by minimising the projection error of a set of control points; it was shown that this calibration method is sub-optimal due to the extreme camera pose. Instead, the extrinsic parameters were estimated by minimising the world reconstruction error. For a 1:100 scale model the median reconstruction error was 0.0043 m and the distribution of errors had an interquartile range of 0.0025 m. The Acceptable Error Rate, the percentage of points that were reconstructed with less than 0.005 m of error, was found to be 63.5%. The overall accuracy of the algorithm was assessed using the precision and the recall. It was found that players could be extracted within 1 m of their ground truth coordinates with a precision of 75% and a recall of 66%, a respective improvement of 20% and 16% on the state of the art. However, it was also found that the likelihood of extraction decreases the further a player is from the camera, reducing to close to zero in the parts of the pitch furthest from the camera. These results suggest that the developed algorithm is unsuitable for identifying player coordinates in the extreme regions of a full field hockey pitch; however, this limitation may be overcome by using multiple collocated cameras focussed on different regions of the pitch. Equally, the algorithm is sport agnostic, so could be used in a sport that uses a smaller pitch.
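The planar mapping underlying the Reconstruct World Points step can be illustrated by fitting an image-to-pitch homography with the Direct Linear Transform and then measuring the world reconstruction error. The pixel coordinates below are synthetic and the landmark choice is assumed; the thesis additionally models wide-angle lens distortion, which this sketch omits:

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Direct Linear Transform: homography mapping image points to the pitch plane."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)          # null-space vector, reshaped to 3x3

def to_world(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]                  # homogeneous -> Cartesian

# four pitch corners (metres, standard 91.4 x 55 m pitch) and
# synthetic image positions for a hypothetical wide-angle view
world = [(0, 0), (91.4, 0), (91.4, 55), (0, 55)]
image = [(120, 600), (1800, 640), (1500, 200), (300, 180)]
H = fit_homography(image, world)

# world reconstruction error on the calibration points themselves
errors = [np.linalg.norm(to_world(H, i) - np.array(w))
          for i, w in zip(image, world)]
```

With exactly four correspondences the fit is exact; minimising the world reconstruction error, as the thesis does, matters once more points and lens distortion enter.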

    Data efficiency in imitation learning with a focus on object manipulation

    Imitation is a natural human behaviour that helps us learn new skills. Modelling this behaviour in robots, however, has many challenges. This thesis investigates the challenge of handling the expert demonstrations in an efficient way, so as to minimise the number of demonstrations required for robots to learn. To achieve this, it focuses on demonstration data efficiency at various steps of the imitation process. Specifically, it presents new methodologies that offer ways to acquire, augment and combine demonstrations in order to improve the overall imitation process. Firstly, the thesis explores an inexpensive and non-intrusive way of acquiring dexterous human demonstrations. Human hand actions are quite complex, especially when they involve object manipulation. The proposed framework tackles this by using a camera to capture the hand information and then retargeting it to a dexterous hand model. It does this by combining inverse kinematics with stochastic optimisation. The demonstrations collected with this framework can then be used in the imitation process. Secondly, the thesis presents a novel way to apply data augmentation to demonstrations. The main difficulty of augmenting demonstrations is that naively perturbing their trajectories can render them unsuccessful. Whilst previous works require additional knowledge about the task or demonstrations to achieve this, the proposed method performs augmentation automatically. To do this, it introduces a correction network that corrects the augmentations based on the distribution of the original experts. Lastly, the thesis investigates data efficiency in a multi-task scenario where it additionally proposes a data combination method. Its aim is to automatically divide a set of tasks into sub-behaviours. Contrary to previous works, it does this without any additional knowledge about the tasks. To achieve this, it uses both task-specific and shareable modules.
This minimises negative transfer and allows for the method to be applied to various task sets with different commonalities.
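The first contribution, combining inverse kinematics with stochastic optimisation, can be given a minimal flavour with a toy two-link planar finger whose joint angles are found by stochastic local search. The arm model, target and search parameters are invented for illustration and are not the thesis's hand model or optimiser:

```python
import math, random

def fk(angles, lengths=(1.0, 1.0)):
    """Forward kinematics of a planar two-link finger: joint angles -> fingertip."""
    x = y = a = 0.0
    for t, l in zip(angles, lengths):
        a += t
        x += l * math.cos(a)
        y += l * math.sin(a)
    return x, y

def ik_random_search(target, iters=2000, sigma=0.3, seed=1):
    """Stochastic search for joint angles whose fingertip reaches `target`."""
    rng = random.Random(seed)
    best = [0.0, 0.0]
    cost = lambda q: math.dist(fk(q), target)
    best_c = cost(best)
    for _ in range(iters):
        cand = [q + rng.gauss(0, sigma) for q in best]   # perturb current best
        c = cost(cand)
        if c < best_c:                                    # keep only improvements
            best, best_c = cand, c
    return best, best_c

angles, err = ik_random_search((1.2, 0.8))   # a reachable fingertip target
```

For a real dexterous hand the cost would compare observed fingertip positions from the camera against the retargeted model, with many more joints.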

    A study of the application of adaptive optics (AO) in optical coherence tomography (OCT) and confocal microscopy for the purpose of high resolution imaging

    Imaging the eye is complicated by optical aberrations introduced by tissues of the anterior eye such as the cornea and lens. Adaptive optics (AO) and scanning laser ophthalmoscopy (SLO) have been combined to detect and compensate for these aberrations through the use of one or more correcting devices. Different corrector options exist, such as a liquid crystal lens or a deformable mirror (DM), the latter being used in this thesis. This study seeks to use the ability of the DM to add focus/defocus aberrations to the closed-loop AO system. This procedure could allow for dynamic focus control during generation of B-scan images using spectral domain optical coherence tomography (SD-OCT), where typically this is only possible using slower time domain techniques. The confocal gate scanning is controlled using the focus-altering aberrations created by changing the shape of the deformable mirror. Using the novel master-slave interferometry method, multiple live en-face images can be acquired simultaneously. In this thesis, application of this method to an AO system is presented whereby en-face images may be acquired at multiple depths simultaneously. As an extension to this research, an OCT despeckle method is demonstrated. Further to this work is the investigation of the role of AO in the optimisation of optical systems without the requirement for direct aberration measurement. Towards this end, genetic algorithms (GA) may be employed to control the DM in an iterative process to improve the coupling of light into the fibre.
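A genetic algorithm driving mirror actuator commands towards better fibre coupling, as mentioned in the final sentence, might look like the following sketch. The merit function, aberration profile and GA parameters are invented for illustration and do not describe the experimental system:

```python
import random

ABERRATION = [0.8, -0.3, 0.5, -0.6, 0.2]   # hypothetical unknown aberration modes

def coupling(voltages):
    """Toy merit: fibre coupling peaks when the mirror cancels the aberration."""
    residual = sum((v + a) ** 2 for v, a in zip(voltages, ABERRATION))
    return 1.0 / (1.0 + residual)

def genetic_search(merit, dim, pop=30, gens=60, mut=0.2, seed=0):
    """Maximise `merit` over actuator commands with a simple elitist GA."""
    rng = random.Random(seed)
    popn = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=merit, reverse=True)
        parents = popn[:pop // 2]                  # truncation selection (elitist)
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                 # occasional Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.1)
            children.append(child)
        popn = parents + children
    return max(popn, key=merit)

best = genetic_search(coupling, dim=5)
```

In practice the merit would be a measured intensity behind the fibre, so the GA needs no direct aberration measurement, which is exactly the appeal noted above.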

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology such that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard free terrain in order to minimise the risk of mission ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. The majority of truly scientifically interesting locations on planetary surfaces are rarely found in such hazard free and easily accessible locations, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system that is capable of supporting pin-point landing operations for next generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter.
The primary contribution in this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm based on a robust square-root unscented Kalman filtering framework, and the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm that has the potential to produce very dense and highly accurate digital elevation models (DEMs) possessing sufficient resolution to achieve the sensing accuracy required by next generation landers. Such a system is capable of adapting to potential changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which may translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur that will affect the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained using this system from both real and synthetic descent imagery, allowing for the production of accurate DEMs. While some further work is required to produce DEMs that possess the resolution and accuracy needed to determine slopes and the presence of small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build and goes a long way towards developing a highly robust and accurate solution.
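The unscented transform at the heart of an unscented Kalman filtering framework can be illustrated as follows. This is a generic textbook sketch in the standard (non-square-root) form, and the pinhole projection at the end is a hypothetical measurement model, not the thesis's camera model:

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Symmetric sigma points of the unscented transform (2n+1 points)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)   # matrix square root of scaled cov
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    weights = [kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n)
    return np.array(pts), np.array(weights)

def unscented_transform(f, mean, cov, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f."""
    pts, w = sigma_points(mean, cov, kappa)
    y = np.array([f(p) for p in pts])
    y_mean = w @ y
    d = y - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov

# e.g. pushing a 3-D landmark estimate through a unit-focal pinhole projection,
# the kind of nonlinear measurement step an SFM filter repeats per feature
mean = np.array([0.0, 0.0, 5.0])
cov = np.diag([0.1, 0.1, 0.5])
proj = lambda p: p[:2] / p[2]
m, P = unscented_transform(proj, mean, cov)
```

The square-root formulation used in the thesis propagates a Cholesky factor directly for numerical robustness; the sigma-point logic is the same.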

    Real-time performance-focused localisation techniques for autonomous vehicle: a review


    Resilient Infrastructure and Building Security


    Artificial Intelligence Applications for Drones Navigation in GPS-denied or degraded Environments

    The abstract is in the attachment.