    Modelling the induced magnetic signature of naval vessels

    In the construction of naval vessels stealth is an important design feature. With recent advances in electromagnetic sensor technology the wartime threat to shipping posed by electromagnetically triggered mines is becoming more significant and consequently the need to understand, predict and reduce the electromagnetic signature of ships is growing. There are a number of components to the electromagnetic field surrounding a ship, with each component originating from different physical processes. The work presented in this study is concerned with the magnetic signature resulting from the magnetisation of the ferromagnetic material of the ship under the influence of the earth's magnetic field. The detection threat arising from this induced magnetic signature has been known for many years, and consequently warships are generally fitted with degaussing coils which aim to generate a masking field to counteract this signature. In this work computational models are developed to enable the induced magnetic signature and the effects of degaussing coils to be studied. The models are intended to provide a tool set to aid the electromagnetic signature analyst in ensuring that pre-production designs of a vessel lie within specified induced magnetic signature targets. Techniques are also presented which allow the rapid calculation of currents in degaussing coils; this is necessary because the induced magnetisation of a vessel changes with orientation. Three models are presented within this work. The first model represents a ship as a simple geometric shape, a prolate spheroidal shell of a given relative permeability. Analytical expressions are derived which characterise the magnetic perturbation to a previously uniform magnetic field, the earth's magnetic field, when the spheroid is placed within its influence. These results provide a quantitative insight into the shielding of large internal magnetic sources by the hull. This model is intended for use in preliminary design studies. A second model is described which is based on the finite element method. This is a numerical model which has the capability of accurately reproducing the relatively complex geometry of a ship and of including the effects of degaussing coils. For these reasons this model is intended for detailed quantitative studies of the induced magnetic signature. A method is described to calculate the optimal set of degaussing coil currents required to minimise the induced magnetic signature, and the induced signature is presented both with and without degaussing. For the successful application of the finite element method the generation of a mesh is of extreme importance. In this work a mesh generation procedure is described which permits meshes to be generated around a collection of planar surfaces. The relatively complex geometry of a ship can easily be specified as a number of planar surfaces and, from this, the finite element mesh can be generated automatically. The automatic mesh generation detailed in this work eliminates an otherwise labour-intensive step in the analysis procedure. These techniques are sufficiently powerful to allow meaningful calculations for real ships to be performed on desktop computers of modest power. An example is presented which highlights the application of this model to a hypothetical ship structure. The third model detailed is specifically designed to study the induced magnetic signature of mine countermeasures vessels. Here the induced magnetic signature is no longer dominated by the gross structure of the ship, which is constructed from non-magnetic materials, but arises from the combined effect of the individual items of machinery onboard the craft
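
    One common way to pose the degaussing-current calculation mentioned above is as a linear least-squares problem. The sketch below is only a minimal illustration of that idea, not the thesis's method: the coil response matrix A and the induced field b_induced are synthetic stand-ins for quantities that would come from the analytical or finite element models.

```python
import numpy as np

# Hypothetical data: b_induced holds the induced field at sensor points
# below the hull, and A[i, j] is the field produced at sensor i by unit
# current in degaussing coil j. Both are placeholders, not model output.
rng = np.random.default_rng(0)
n_sensors, n_coils = 200, 8
A = rng.normal(size=(n_sensors, n_coils))
b_induced = rng.normal(size=n_sensors)

# Optimal coil currents minimise || b_induced + A @ I ||_2, i.e. the
# residual signature once the coil field is superimposed.
currents, *_ = np.linalg.lstsq(A, -b_induced, rcond=None)

residual = b_induced + A @ currents
print(np.linalg.norm(b_induced), np.linalg.norm(residual))
```

    Because the induced magnetisation changes with the vessel's orientation, a solve of this kind would have to be repeated (or parameterised by heading), which is why rapid calculation of coil currents matters.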

    Field D* pathfinding in weighted simplicial complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing the region. Robust solutions to this problem are computationally expensive since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes: specifically, triangulations in 2D and tetrahedral meshes in 3D
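
    To make the sampling approach described above concrete, the hedged sketch below builds a tiny graph by placing sample points on the boundary between two hypothetical weighted regions and runs plain Dijkstra over it; the regions, weights, and point spacing are illustrative assumptions only.

```python
import math
import heapq

# Two stacked regions: A (weight 1.0) for y in [0, 1], B (weight 3.0)
# for y in [1, 2]. Extra sample points are placed on the shared edge
# y = 1; each crossing edge is weighted by the traversed region's
# weight times its Euclidean length.
start, goal = (0.0, 0.0), (1.0, 2.0)
boundary = [(x / 10.0, 1.0) for x in range(11)]

def seg_cost(p, q, weight):
    return weight * math.hypot(q[0] - p[0], q[1] - p[1])

graph = {p: [] for p in [start, goal] + boundary}
for b in boundary:
    graph[start].append((b, seg_cost(start, b, 1.0)))  # cross region A
    graph[b].append((goal, seg_cost(b, goal, 3.0)))    # cross region B

# Plain Dijkstra over the sampled graph.
dist = {start: 0.0}
pq = [(0.0, start)]
while pq:
    d, u = heapq.heappop(pq)
    if d > dist.get(u, math.inf):
        continue
    for v, w in graph[u]:
        nd = d + w
        if nd < dist.get(v, math.inf):
            dist[v] = nd
            heapq.heappush(pq, (nd, v))

print(dist[goal])  # approximate weighted shortest-path cost
```

    The path accuracy of such a sampling scheme depends directly on how densely the boundary is sampled, which is the source of the high connectivity the abstract criticises.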

    A high performance 3D exact euclidean distance transform algorithm for distributed computing

    The Euclidean distance transform (EDT) is used in various methods in pattern recognition, computer vision, image analysis, physics, applied mathematics and robotics. To date, several sequential EDT algorithms have been described in the literature; however, they are time- and memory-consuming for images with large resolutions. Therefore, parallel implementations of the EDT are required, especially for 3D images. This paper presents a parallel implementation, based on domain decomposition, of a well-known 3D Euclidean distance transform algorithm, and analyzes its performance on a cluster of workstations. The use of a data compression tool to reduce communication time is investigated and discussed. Among the obtained performance results, this work shows that data compression is an essential tool for clusters with low-bandwidth networks
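
    As a minimal, single-process sketch of the decomposition and compression ideas (not the paper's algorithm), the snippet below splits a synthetic volume into slabs, computes a local EDT per slab with SciPy, and compresses one boundary plane with zlib to illustrate the communication saving; the inter-slab merge and correction step is omitted.

```python
import numpy as np
import zlib
from scipy import ndimage

# Hypothetical 3D binary volume (True = feature voxel).
volume = np.random.rand(64, 256, 256) > 0.999

# Naive domain decomposition along the z-axis into equal slabs,
# one per worker.
n_workers = 4
slabs = np.array_split(volume, n_workers, axis=0)

# Each worker computes a local EDT on its slab; boundary planes would
# then be exchanged and the distances corrected, which is the
# communication step that compression targets.
local_edts = [ndimage.distance_transform_edt(~s) for s in slabs]

# Compressing a boundary plane before sending it illustrates the
# bandwidth saving reported for low-bandwidth clusters.
boundary = local_edts[0][-1].astype(np.float32).tobytes()
compressed = zlib.compress(boundary)
print(len(boundary), len(compressed))
```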

    Mathematical techniques for shape modelling in computer graphics: A distance-based approach.

    This research is concerned with shape modelling in computer graphics. The dissertation provides a review of the main research topics and developments in shape modelling and discusses current visualisation techniques required for the display of the models produced. In computer graphics surfaces are normally defined using analytic functions. Geometry, however, supplies many shapes without providing their analytic descriptions; these are defined implicitly through fundamental relationships between primitive geometrical objects. Transferring this approach to computer graphics opens new directions in shape modelling, by enabling the definition of new objects or by supplying a rigorous alternative to analytical definitions of objects with complex analytical descriptions. We review, in this dissertation, relevant works in the area of implicit modelling. Based on our observations on the shortcomings of these works, we develop an implicit modelling approach which draws on a seminal technique in this area: the distance-based object definition. We investigate the principles, potential and applications of this technique both in conceptual terms (modelling aspects) and on technical merit (visualisation issues). This is the context of this PhD research. The conceptual and technological frameworks developed are presented in terms of a comprehensive investigation of an object's constituent primitives and modelling constraints on the one hand, and software visualisation platforms on the other. Finally, we adopt a critical perspective of our work to discuss possible directions for further improvement and exploitation of the modelling approach we have developed
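
    As a hedged illustration of a distance-based implicit definition (the primitives, radius and blend below are invented for illustration and are not the dissertation's formulation), the snippet defines a field from distances to two point primitives and samples it on a regular grid; the surface is the zero level set of that field.

```python
import numpy as np

# Two point primitives and an offset radius; the object is the smooth
# union (soft minimum) of their offset distance fields.
centres = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
radius, k = 0.7, 4.0

# Sample the field on a regular grid; marching cubes over this grid
# would yield a renderable mesh of the implicit surface.
xs = np.linspace(-1.5, 2.5, 48)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)   # (48,48,48,3)

# Distance from every grid point to each point primitive.
d = np.linalg.norm(grid[..., None, :] - centres, axis=-1)          # (48,48,48,2)

# Implicit value: negative inside the blobby shape, positive outside.
field = -np.logaddexp(-k * (d[..., 0] - radius),
                      -k * (d[..., 1] - radius)) / k
print(field.shape, field.min(), field.max())
```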

    Geological parameterisation of petroleum reservoir models for improved uncertainty quantification

    As uncertainty can never be removed from reservoir forecasts, the accurate quantification of uncertainty is the only appropriate way to make reservoir predictions. Bayes' Theorem defines a framework by which the uncertainty in a reservoir can be ascertained by updating prior definitions of uncertainty with the mismatch between our simulation models and the measured production data. In the simplest version of the Bayesian methodology we assume that a realistic representation of our field exists as a particular combination of model parameters from a set of uniform prior ranges. All models are believed to be initially equally likely, but are updated to new values of uncertainty based on the misfit between simulated and historical production data. Furthermore, most effort in reservoir uncertainty quantification and automated history matching has been applied to non-geological model parameters, preferring to leave the geological aspects of the reservoir static. While such an approach is the easiest to apply, the reality is that the majority of the reservoir uncertainty stems from the geological aspects of the reservoir; therefore geological parameters should be included in the prior, and those priors should be conditioned to include the full amount of geological knowledge so as to remove combinations that are not possible in nature. This thesis develops methods of geological parameterisation to capture geological features and to assess the impact of geologically derived non-uniform prior definitions, and of the choice of modelling method/interpretation, on the quantification of uncertainty. A number of case studies are developed, using synthetic models and a real field data set, that show the inclusion of geological prior data reduces the amount of quantified uncertainty and improves the performance of sampling. The framework allows the inclusion of any data type, to reflect the variety of geological information sources. Errors in the interpretation of the geology and/or the choice of an appropriate modelling method have an impact on the quantified uncertainty. In the cases developed in this thesis all models were able to produce good history matches, but the differences between the models led to differences in the amount of quantified uncertainty. The result is that each quantification would lead to different development decisions, and that a combination of several models may be required when a single modelling approach cannot be defined. The overall conclusion of the work is that geological prior data should be used in uncertainty quantification to reduce the uncertainty in forecasts by preventing bias from non-realistic models
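
    A minimal sketch of the Bayesian updating described above, with entirely synthetic numbers: each candidate model has a misfit between simulated and historical production data, and the prior over models is either uniform or a hypothetical geological prior that zeroes out implausible models. Nothing here reproduces the thesis's case studies; it only shows the mechanics of weighting models by misfit under different priors.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 1000
# Synthetic misfit values for an ensemble of candidate reservoir models.
misfit = rng.gamma(shape=2.0, scale=5.0, size=n_models)

uniform_prior = np.full(n_models, 1.0 / n_models)
# Hypothetical geological prior: half the models are ruled out as
# geologically impossible (e.g. unrealistic facies combinations).
geo_prior = uniform_prior.copy()
geo_prior[: n_models // 2] = 0.0
geo_prior /= geo_prior.sum()

def posterior(prior, misfit):
    # Likelihood proportional to exp(-M/2), as in a Gaussian error model.
    w = prior * np.exp(-misfit / 2.0)
    return w / w.sum()

# Effective sample size as a crude indicator of how concentrated
# (how uncertain) each posterior is.
for name, prior in [("uniform", uniform_prior), ("geological", geo_prior)]:
    p = posterior(prior, misfit)
    print(name, 1.0 / np.sum(p ** 2))
```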

    Optical flow estimation via steered-L1 norm

    Global variational methods for estimating optical flow are among the best performing methods due to the subpixel accuracy and the ‘fill-in’ effect they provide. The fill-in effect allows optical flow displacements to be estimated even in low-textured and untextured areas of the image; the estimation of such displacements is induced by the smoothness term. The L1 norm provides a robust regularisation term for the optical flow energy function with very good edge-preserving performance. However, this norm suffers from several issues, among which is its isotropic nature, which reduces the fill-in effect and ultimately the accuracy of estimation in areas near motion boundaries. In this paper we propose an enhancement to the L1 norm that improves the fill-in effect of this smoothness term. To do this we analyse the structure tensor matrix and use its eigenvectors to steer the smoothness term into components that are ‘orthogonal to’ and ‘aligned with’ image structures. This is done in a primal-dual formulation. Results show a reduced end-point error and improved accuracy compared to the conventional L1 norm
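
    To make the steering idea concrete, the sketch below computes a per-pixel structure tensor for a synthetic image and takes its eigenvectors as the directions aligned with and orthogonal to image structures; the image and smoothing scale are placeholders, and the primal-dual optimisation itself is not shown.

```python
import numpy as np
from scipy import ndimage

# Placeholder grayscale image.
image = np.random.rand(64, 64).astype(np.float64)

# Image gradients and Gaussian-smoothed outer products.
ix = ndimage.sobel(image, axis=1)
iy = ndimage.sobel(image, axis=0)
sigma = 2.0
jxx = ndimage.gaussian_filter(ix * ix, sigma)
jxy = ndimage.gaussian_filter(ix * iy, sigma)
jyy = ndimage.gaussian_filter(iy * iy, sigma)

# Per-pixel 2x2 structure tensor and its eigen-decomposition; the
# eigenvector of the larger eigenvalue points across edges, the other
# along them, giving the two steering directions.
J = np.stack([np.stack([jxx, jxy], -1), np.stack([jxy, jyy], -1)], -2)
eigvals, eigvecs = np.linalg.eigh(J)   # eigenvalues in ascending order
along_edge = eigvecs[..., 0]           # direction aligned with structures
across_edge = eigvecs[..., 1]          # direction orthogonal to structures
print(along_edge.shape, across_edge.shape)
```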

    Field D* Pathfinding in Weighted Simplicial Complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing the region. Robust solutions to this problem are computationally expensive since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes: specifically, triangulations in 2D and tetrahedral meshes in 3D. Such representations offer benefits in terms of space over a weighted grid, since fewer triangles can represent polygonal objects with greater accuracy than a large number of grid cells. By exploiting these savings, we show that Triangulated Field D* can produce an equivalent path cost to grid-based Multi-resolution Field D*, using up to an order of magnitude fewer triangles than grid cells and visiting an order of magnitude fewer nodes. Finally, as a practical demonstration of the utility of our formulation, we show how Field D* can be used to approximate a distance field on the nodes of a simplicial complex, and how this distance field can be used to weight the simplicial complex to produce contour-following behaviour by shortest paths computed with Field D*
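
    As a hedged illustration of the interpolated edge costs at the heart of Field D* style planners (not the authors' exact formulation for triangulations), the function below computes the cost of leaving a node through a triangle edge when the exit point may lie anywhere along that edge, sampling the interpolation parameter rather than using the closed-form minimiser.

```python
import math

def interpolated_edge_cost(s, s1, s2, g1, g2, weight, samples=100):
    """Cost of leaving node s through edge (s1, s2) of a weighted triangle.

    The path may exit through any point interpolated between s1 and s2;
    g1 and g2 are the current cost-to-goal estimates at s1 and s2, and
    weight is the traversal cost of the triangle (s, s1, s2). A closed
    form exists for the minimiser; here we simply sample t in [0, 1].
    """
    best = math.inf
    for i in range(samples + 1):
        t = i / samples
        # Point on the edge between s1 and s2.
        px = s1[0] + t * (s2[0] - s1[0])
        py = s1[1] + t * (s2[1] - s1[1])
        travel = weight * math.hypot(px - s[0], py - s[1])
        goal_estimate = (1 - t) * g1 + t * g2
        best = min(best, travel + goal_estimate)
    return best

# Example: a triangle of weight 2.0 whose far vertices have cost
# estimates 5.0 and 4.0.
print(interpolated_edge_cost((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), 5.0, 4.0, 2.0))
```

    Allowing the exit point to slide along the edge is what lets Field D* produce paths that are not constrained to mesh vertices, and it is this per-edge minimisation that the triangulated extension carries over from the grid setting.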