77 research outputs found

    Taking the Pixel out of the Picture


    Spin-scanning Cameras for Planetary Exploration: Imager Analysis and Simulation

    This thesis investigates a novel approach to spaceborne imaging that builds upon the scan imaging technique, in which camera motion is used to construct an image. It examines the technique's use with wide-angle (≥90° field of view) optics mounted on spin-stabilised probes for large-coverage imaging of planetary environments, and focusses on two instruments. The first is a descent camera concept for a planetary penetrator. The imaging geometry of the instrument is analysed. Image resolution is highest at the penetrator’s nadir and lowest at the horizon, whilst any point on the surface is imaged with the highest possible resolution when the camera’s altitude is equal to that point’s radius from nadir. Image simulation is used to demonstrate the camera’s images and investigate analysis techniques. A study of stereophotogrammetric measurement of surface topography using pairs of descent images is conducted. Measurement accuracies and optimum stereo geometries are presented. The second is the EnVisS (Entire Visible Sky) instrument, under development for the Comet Interceptor mission. The camera’s imaging geometry, coverage and exposure times are calculated, and used to model the expected signal and noise in EnVisS observations. It is found that the camera’s images will suffer from low signal, and four methods for mitigating this – binning, coaddition, time-delay integration and repeat sampling – are investigated and described. Use of these methods will be essential if images of sufficient signal are to be acquired, particularly for conducting polarimetry, the performance of which is modelled using Monte Carlo simulation. Methods of simulating planetary cameras’ images are developed to facilitate the study of both cameras.
These methods enable the accurate simulation of planetary surfaces and cometary atmospheres, are based on Python libraries commonly used in planetary science, and are intended to be readily modified and expanded to facilitate the study of a variety of planetary cameras.
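Two of the signal-mitigation methods named above, coaddition and binning, can be sketched on synthetic low-signal frames. This is a minimal illustration of the general technique, not the thesis's code; the signal level, read noise and frame count are assumed values chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frame(signal=5.0, read_noise=2.0, shape=(64, 64)):
    """One noisy exposure: Poisson photon noise plus Gaussian read noise."""
    photons = rng.poisson(signal, shape).astype(float)
    return photons + rng.normal(0.0, read_noise, shape)

def coadd(frames):
    """Sum N registered frames: signal grows as N, noise only as sqrt(N)."""
    return np.sum(frames, axis=0)

def bin2x2(img):
    """2x2 pixel binning: trade spatial resolution for a ~2x SNR gain."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

single = simulate_frame()
stack = coadd([simulate_frame() for _ in range(16)])

snr_single = single.mean() / single.std()
snr_stack = stack.mean() / stack.std()   # roughly sqrt(16) = 4x better
```

Coadding 16 frames improves the signal-to-noise ratio by about a factor of four, at the cost of 16 exposures' worth of observing time; binning trades spatial resolution instead.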

    A metadata-enhanced framework for high performance visual effects

    This thesis is devoted to reducing the interactive latency of image processing computations in visual effects. Film and television graphic artists depend upon low-latency feedback to receive a visual response to changes in effect parameters. We tackle latency with a domain-specific optimising compiler which leverages high-level program metadata to guide key computational and memory hierarchy optimisations. This metadata encodes static and dynamic information about data dependence and patterns of memory access in the algorithms constituting a visual effect – features that are typically difficult to extract through program analysis – and presents it to the compiler in an explicit form. By using domain-specific information as a substitute for program analysis, our compiler is able to target a set of complex source-level optimisations that a vendor compiler does not attempt, before passing the optimised source to the vendor compiler for lower-level optimisation. Three key metadata-supported optimisations are presented. The first is an adaptation of space and schedule optimisation – based upon well-known compositions of the loop fusion and array contraction transformations – to the dynamic working sets and schedules of a runtime-parameterised visual effect. This adaptation sidesteps the costly solution of runtime code generation by specialising static parameters in an offline process and exploiting dynamic metadata to adapt the schedule and contracted working sets at runtime to user-tunable parameters. The second optimisation comprises a set of transformations to generate SIMD ISA-augmented source code. Our approach differs from autovectorisation by using static metadata to identify parallelism, in place of data dependence analysis, and runtime metadata to tune the data layout to user-tunable parameters for optimal aligned memory access. The third optimisation comprises a related set of transformations to generate code for SIMT architectures, such as GPUs.
Static dependence metadata is exploited to guide large-scale parallelisation for tens of thousands of in-flight threads. Optimal use of the alignment-sensitive, explicitly managed memory hierarchy is achieved by identifying inter-thread and intra-core data sharing opportunities in memory access metadata. A detailed performance analysis of these optimisations is presented for two industrially developed visual effects. In our evaluation we demonstrate up to 8.1x speed-ups on Intel and AMD multicore CPUs and up to 6.6x speed-ups on NVIDIA GPUs over our best hand-written implementations of these two effects. Programmability is enhanced by automating the generation of SIMD and SIMT implementations from a single programmer-managed scalar representation.
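The loop fusion and array contraction transformations mentioned above can be sketched in a few lines. The two "stages" below are placeholder arithmetic standing in for real effect kernels, and the code is an illustration of the general transformations rather than the thesis's compiler output.

```python
import numpy as np

def pipeline_unfused(src):
    """Two separate loops: a full-size temporary lives between the stages."""
    tmp = np.empty_like(src)          # intermediate array, same size as the input
    out = np.empty_like(src)
    for i in range(len(src)):
        tmp[i] = 0.5 * src[i]         # stage 1 (stand-in for e.g. a blur)
    for i in range(len(src)):
        out[i] = tmp[i] + 1.0         # stage 2 (stand-in for e.g. a gain)
    return out

def pipeline_fused(src):
    """After loop fusion, the intermediate array is contracted to a single
    scalar, removing a whole-image store/load and improving locality."""
    out = np.empty_like(src)
    for i in range(len(src)):
        t = 0.5 * src[i]              # contracted intermediate: one scalar
        out[i] = t + 1.0
    return out
```

Both versions compute the same result; the fused form touches each input element once and never materialises the intermediate image, which is the memory-hierarchy benefit the compiler's metadata is used to unlock safely.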

    Semi-automated geomorphological mapping applied to landslide hazard analysis

    Computer-assisted three-dimensional (3D) mapping using stereo and multi-image (“softcopy”) photogrammetry is shown to enhance the visual interpretation of geomorphology in steep terrain with the direct benefit of greater locational accuracy than traditional manual mapping. This would benefit multi-parameter correlations between terrain attributes and landslide distribution in both direct and indirect forms of landslide hazard assessment. Case studies involve synthetic models of a landslide, and field studies of a rock slope and steep undeveloped hillsides with both recently formed and partly degraded, old landslide scars. Diagnostic 3D morphology was generated semi-automatically both using a terrain-following cursor under stereo-viewing and from high resolution digital elevation models created using area-based image correlation, further processed with curvature algorithms. Laboratory-based studies quantify limitations of area-based image correlation for measurement of 3D points on planar surfaces with varying camera orientations. The accuracy of point measurement is shown to be non-linear with limiting conditions created by both narrow and wide camera angles and moderate obliquity of the target plane. Analysis of the results with the planar surface highlighted problems with the controlling parameters of the area-based image correlation process when used for generating DEMs from images obtained with a low-cost digital camera. Although the specific cause of the phase-wrapped image artefacts identified was not found, the procedure would form a suitable method for testing image correlation software, as these artefacts may not be obvious in DEMs of non-planar surfaces. Modelling of synthetic landslides shows that Fast Fourier Transforms are an efficient method for removing noise, as produced by errors in measurement of individual DEM points, enabling diagnostic morphological terrain elements to be extracted.
Component landforms within landslides are complex entities and conversion of the automatically-defined morphology into geomorphology was only achieved with manual interpretation; however, this interpretation was facilitated by softcopy-driven stereo viewing of the morphological entities across the hillsides. In the final case study of a large landslide within a man-made slope, landslide displacements were measured using a photogrammetric model consisting of 79 images captured with a helicopter-borne, hand-held, small-format digital camera. Displacement vectors and a thematic geomorphological map were superimposed over an animated, 3D photo-textured model to aid non-stereo visualisation and communication of results.
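The Fourier-transform noise removal described above can be sketched as a simple low-pass filter on a synthetic DEM. The terrain shape, noise level and cutoff frequency below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def fft_lowpass(dem, cutoff=0.05):
    """Zero Fourier components above a normalised spatial frequency,
    suppressing the high-frequency noise from per-point measurement error."""
    spectrum = np.fft.fft2(dem)
    fy = np.fft.fftfreq(dem.shape[0])[:, None]
    fx = np.fft.fftfreq(dem.shape[1])[None, :]
    keep = np.hypot(fx, fy) <= cutoff
    return np.fft.ifft2(spectrum * keep).real

# Smooth synthetic terrain plus per-point elevation measurement error.
y, x = np.mgrid[0:128, 0:128]
surface = np.sin(2 * np.pi * x / 64) + np.cos(2 * np.pi * y / 32)
noisy = surface + rng.normal(0.0, 0.3, surface.shape)
smoothed = fft_lowpass(noisy)
```

Because the measurement noise is spread across all spatial frequencies while the terrain signal is concentrated at low frequencies, discarding the high-frequency components removes most of the noise power and leaves the broad morphology from which diagnostic terrain elements can be extracted.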

    A PCA approach to the object constancy for faces using view-based models of the face

    The analysis of object and face recognition by humans attracts a great deal of interest, mainly because of its many applications in various fields, including psychology, security, computer technology, medicine and computer graphics. The aim of this work is to investigate whether a PCA-based mapping approach can offer a new perspective on models of object constancy for faces in human vision. An existing system for facial motion capture and animation developed for performance-driven animation of avatars is adapted, improved and repurposed to study face representation in the context of viewpoint and lighting invariance. The main goal of the thesis is to develop and evaluate a new approach to viewpoint invariance that is view-based and allows mapping of facial variation between different views to construct a multi-view representation of the face. The thesis describes a computer implementation of a model that uses PCA to generate example-based models of the face. The work explores the joint encoding of expression and viewpoint using PCA and the mapping between view-specific PCA spaces. The simultaneous, synchronised video recording of 6 views of the face was used to construct multi-view representations, which helped to investigate how well multiple views could be recovered from a single view via the content-addressable memory property of PCA. A similar approach was taken to lighting invariance. Finally, the possibility of constructing a multi-view representation from asynchronous view-based data was explored. The results of this thesis have implications for a continuing research problem in computer vision – the problem of recognising faces and objects from different perspectives and in different lighting. It also provides a new approach to understanding viewpoint invariance and lighting invariance in human observers.
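The example-based PCA modelling at the core of the work can be sketched in a few functions: fit a linear subspace to training vectors (in the thesis, flattened face data), then encode and reconstruct examples from their coefficients. The data here are random vectors in a known 3-dimensional subspace, purely for illustration; function names are our own, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_pca(X, n_components):
    """Return the mean and the leading principal components of X (one row per example)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(x, mean, components):
    """Encode one example as coefficients in the PCA space."""
    return components @ (x - mean)

def reconstruct(coeffs, mean, components):
    """Recover an example from its coefficients."""
    return mean + components.T @ coeffs

# Training data lying in a 3-dimensional subspace of a 50-dimensional space.
X = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 50))
mean, comps = fit_pca(X, n_components=3)
coeffs = project(X[0], mean, comps)
recovered = reconstruct(coeffs, mean, comps)
```

Joint encodings of the kind the thesis studies follow the same pattern: concatenating several views of the same face into one training vector lets a PCA fitted on the joint vectors associate the views, which is the basis of the content-addressable recovery of multiple views from one.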

    MAPPA. Methodologies applied to archaeological potential Predictivity

    The fruitful cooperation over the years between the university teaching staff of Università di Pisa (Pisa University), the officials of the Soprintendenza per i Beni Archeologici della Toscana (Superintendency for Archaeological Heritage of Tuscany), the officials of the Soprintendenza per i Beni Architettonici, Paesaggistici, Artistici ed Etnoantropologici per le Province di Pisa e Livorno (Superintendency for Architectural, Landscape and Ethno-anthropological Heritage for the Provinces of Pisa and Livorno), and the Comune di Pisa (Municipality of Pisa) has favoured a great deal of research on issues regarding archaeological heritage and the reconstruction of the environmental and landscape context in which Pisa has evolved throughout the centuries of its history. The desire to merge this remarkable know-how into an organic framework and, above all, to make it easily accessible, not only to the scientific community and professional categories involved, but to everyone, together with the wish to provide Pisa with a Map of archaeological potential (the research, protection and urban planning tool capable of reconciling the heritage protection needs of the remains of the past with the development requirements of the future), led to the development of the MAPPA project – Methodologies applied to archaeological potential predictivity – funded by Regione Toscana in 2010. The two-year project started on 1 July 2011 and will end on 30 June 2013. The first year of research was dedicated to achieving the first objective, that is, to retrieving the results of archaeological investigations from the archives of the Superintendencies and the University and from the pages of scientific publications, and to making them easily accessible; these results have often never been published, or have been published incompletely and very slowly.
For this reason, a webGIS (“MappaGIS”, which may be freely accessed at http://mappaproject.arch.unipi.it/?page_id=452) was created and will be followed by a MOD (Mappa Open Data archaeological archive), the first Italian archive of open archaeological data, in line with European directives regarding access to Public Administration data, recently implemented by the Italian government as well (the beta version of the archive can be viewed at http://mappaproject.arch.unipi.it/?page_id=454). Details are given in this first volume about the operational decisions that led to the creation of the webGIS: the software used, the system architecture, and the organisation of information and its structuring into various information layers. Beyond this, the creation of the webGIS also gave us the opportunity to set down a series of considerations alongside the work carried out by the MAPPA Laboratory researchers. We took the decision to publish these considerations with a view to promoting debate within the scientific community and, more generally, within the professional categories involved (e.g. public administrators, university researchers, archaeology professionals). This allowed us to address the critical aspects that emerged, such as the need to update archaeological excavation documentation and data archiving systems in order to adjust them to the new standards provided by IT development and, most of all, the need for greater and more rapid dissemination of information, without which research cannot truly progress. Indeed, it is by comparing and connecting new data in every possible and, at times, unexpected way that research can truly thrive.

    Implementation of computer visualisation in UK planning

    Within the processes of public consultation and development management, planners are required to consider spatial information and appreciate spatial transformations and future scenarios. In the past, conventional media such as maps, plans, illustrations, sections, and physical models have been used. These traditional visualisations are highly abstract, sometimes difficult for lay people to understand, and inflexible in terms of the range of scenarios which can be considered. Yet due to technical advances and falling costs, the potential for computer-based visualisation has much improved, and it has been increasingly adopted within the planning process. Despite the growth in this field, insufficient consideration has been given to the possible weaknesses of computerised visualisations. Reflecting this lack of research, this study critically evaluates the use and potential of computerised visualisation within this process. The research is divided into two components: case study analysis and reflections of the author following his involvement in the design and use of visualisations in a series of planning applications; and in-depth interviews with experienced practitioners in the field. Based on a critical review of the existing literature, this research explores in particular the issues of credibility, realism and costs of production. The research findings illustrate the importance of the credibility of visualisations, a topic given insufficient consideration within the academic literature. Whereas the realism of visualisations has been the focus of much previous research, the results of the case studies and interviews with practitioners undertaken in this research suggest a ‘photo-realistic’ level of detail may not be required as long as the observer considers the visualisations to be a credible reflection of the underlying reality.
Although visualisations will always be a simplification of reality and their level of realism is subjective, there is still potential for developing guidelines or protocols for image production based on commonly agreed standards. In the absence of such guidelines there is a danger that scepticism about the credibility of computer visualisations will prevent the approach being used to its full potential. These findings suggest there needs to be a balance between scientific protocols and artistic licence in the production of computer visualisation. In order to be sufficiently credible for use in decision making within the planning process, the production of computer visualisation needs to follow a clear methodology and scientific protocols set out in good practice guidance published by professional bodies and governmental organisations.

    Vector Graphics Animation with Time-Varying Topology

    We introduce the Vector Animation Complex (VAC), a novel data structure for vector graphics animation, designed to support the modeling of time-continuous topological events. This allows features of a connected drawing to merge, split, appear, or disappear at desired times via keyframes that introduce the desired topological change. Because the resulting space-time complex directly captures the time-varying topological structure, features are readily edited in both space and time in a way that reflects the intent of the drawing. A formal description of the data structure is provided, along with topological and geometric invariants. We illustrate our modeling paradigm with experimental results on various examples.
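The space-time flavour of such a complex can be hinted at with a minimal, hypothetical sketch: a vertex either exists at a single keyframe time or is interpolated across the interval between two keys. The class and field names below are illustrative inventions, not the paper's actual API, and only geometry (not the topological operators) is shown.

```python
from dataclasses import dataclass

@dataclass
class KeyVertex:
    """A vertex that exists at exactly one keyframe time."""
    time: float
    pos: tuple

@dataclass
class InbetweenVertex:
    """A vertex whose geometry spans the interval between two key vertices."""
    before: KeyVertex
    after: KeyVertex

    def pos_at(self, t):
        # Linear interpolation in time; a richer model could use arbitrary
        # animated trajectories between the keyframes.
        u = (t - self.before.time) / (self.after.time - self.before.time)
        return tuple((1 - u) * a + u * b
                     for a, b in zip(self.before.pos, self.after.pos))

v = InbetweenVertex(KeyVertex(0.0, (0.0, 0.0)), KeyVertex(1.0, (2.0, 4.0)))
```

Higher-dimensional cells (edges, faces, and their inbetween counterparts) would be built on the same key/inbetween pattern, which is what lets topological events such as merges and splits be pinned to specific keyframe times.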