
    The State of the Art in Cartograms

    Cartograms combine statistical and geographical information in thematic maps, where areas of geographical regions (e.g., countries, states) are scaled in proportion to some statistic (e.g., population, income). Cartograms make it possible to gain insight into patterns and trends in the world around us and have been very popular visualizations for geo-referenced data for over a century. This work surveys cartogram research in visualization, cartography and geometry, covering a broad spectrum of different cartogram types: from the traditional rectangular and table cartograms to Dorling and diffusion cartograms. A particular focus is the study of the major cartogram dimensions: statistical accuracy, geographical accuracy, and topological accuracy. We review the history of cartograms, describe the algorithms for generating them, and consider task taxonomies. We also review quantitative and qualitative evaluations, and we use these to arrive at design guidelines and research challenges.
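    As a rough illustration of one cartogram type named above, the sketch below lays out a Dorling-style cartogram: each region is replaced by a circle whose area is proportional to its statistic, and overlapping circles are pushed apart iteratively. This is a minimal sketch under stated assumptions, not an algorithm from the survey; the region names, centroids and values are invented.

```python
import math

def dorling_layout(regions, iterations=200, step=0.05):
    """Very simplified Dorling-style layout.

    regions: list of dicts with 'x', 'y' (centroid) and 'value' (the statistic).
    Circle areas are made proportional to 'value'; overlaps are relaxed away.
    """
    # Area proportional to value  =>  radius proportional to sqrt(value).
    scale = 1.0 / math.sqrt(max(r["value"] for r in regions))
    for r in regions:
        r["r"] = math.sqrt(r["value"]) * scale
    for _ in range(iterations):
        for a in regions:
            for b in regions:
                if a is b:
                    continue
                dx, dy = b["x"] - a["x"], b["y"] - a["y"]
                dist = math.hypot(dx, dy) or 1e-9
                overlap = a["r"] + b["r"] - dist
                if overlap > 0:
                    # Push the overlapping pair apart along the joining line.
                    push = step * overlap / dist
                    a["x"] -= dx * push
                    a["y"] -= dy * push
                    b["x"] += dx * push
                    b["y"] += dy * push
    return regions

# Invented demo data: three regions with centroids and a statistic.
demo = [
    {"name": "A", "x": 0.0, "y": 0.0, "value": 100},
    {"name": "B", "x": 1.0, "y": 0.0, "value": 400},
    {"name": "C", "x": 0.5, "y": 1.0, "value": 50},
]
for r in dorling_layout(demo):
    print(r["name"], round(r["x"], 2), round(r["y"], 2), round(r["r"], 2))
```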

    Deconstructing the Ferraris maps (1770-1778) : a study of the map production process and its implications for geometric accuracy

    In the 18th century, what is now Belgium formed part of the Habsburg Empire as the Austrian Netherlands. Between 1770 and 1774 this territory was subjected to a large-scale military survey, carried out by the artillery corps of the Austrian Netherlands under the command of its director-general, count de Ferraris. By the end of 1777, this exercise had resulted in two maps: the manuscript Carte de cabinet (1:11,520) and the printed Carte marchande (1:86,400). The importance of the Ferraris maps as cartographic heritage is undeniable. Improved digital access to the maps in recent years, in combination with enhanced computation, visualisation and spatial querying capabilities, is now offering new ways to study the historical information contained in the maps, be it their geographical content or the techniques used to gather and display it. A thorough reappraisal of the maps themselves and the way they were made therefore seemed justified. Consequently, this dissertation aims to dismantle the maps to reveal their individual components, be it input from field surveys, existing geodetic data or other maps, and to gain insight into which external factors influenced how all this potential input was combined to form the end products, that is, the maps. This is done by studying archival sources and by performing analyses on the maps themselves to, among other things, determine their geometric accuracy.
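    For readers unfamiliar with how the geometric accuracy of a historical map is commonly quantified, the sketch below fits a four-parameter Helmert (similarity) transformation between points identified on the old map and their modern reference coordinates, and reports the root-mean-square error of the residuals. This is a generic approach offered as an illustration, not the dissertation's specific workflow; the matched control points are made up.

```python
import numpy as np

def helmert_fit(src, dst):
    """Least-squares 4-parameter Helmert (similarity) fit mapping src -> dst.

    src, dst: (n, 2) arrays of corresponding point coordinates.
    Returns (params, rmse) with params = (a, b, tx, ty) such that
    x' = a*x - b*y + tx  and  y' = b*x + a*y + ty.
    """
    x, y = src[:, 0], src[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.block([[x[:, None], -y[:, None], ones[:, None], zeros[:, None]],
                  [y[:, None],  x[:, None], zeros[:, None], ones[:, None]]])
    rhs = np.concatenate([dst[:, 0], dst[:, 1]])
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, tx, ty = params
    pred = np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
    rmse = float(np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1))))
    return params, rmse

# Hypothetical matched points: historical map units vs. modern reference metres.
src = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 40.0], [55.0, 38.0]])
dst = np.array([[101.2, 120.5], [401.0, 148.8], [252.3, 399.1], [549.7, 381.4]])
params, rmse = helmert_fit(src, dst)
print("RMSE of residuals:", rmse)
```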

    Urban and regional planning models and GIS


    Measuring and simulating haemodynamics due to geometric changes in facial expression

    The human brain has evolved to be very adept at recognising imperfections in human skin. In particular, observing someone’s facial skin appearance is important in recognising when someone is ill, or in finding a suitable mate. It is therefore a key goal of computer graphics research to produce highly realistic renderings of skin. However, the optical processes that give rise to skin appearance are complex and subtle. To address this, computer graphics research has incorporated increasingly sophisticated models of skin reflectance. These models are generally based on static concentrations of the skin chromophores: melanin and haemoglobin. However, haemoglobin concentrations are far from static, as blood flow is directly caused by changes in both facial expression and emotional state. In this thesis, we explore how blood flow changes as a consequence of changing facial expression, with the aim of producing more accurate models of skin appearance. To build an accurate model of blood flow, we base it on real-world measurements of blood concentrations over time. We describe, in detail, the steps required to obtain blood concentrations from photographs of a subject. These steps are then used to measure blood concentration maps for a series of expressions that define a wide gamut of human expression. From this, we define a blending algorithm that allows us to interpolate these maps to generate concentrations for other expressions. This technique, however, requires specialist equipment to capture the maps in the first place. We try to rectify this problem by investigating a direct link between changes in facial geometry and haemoglobin concentrations. This requires building a unique capture device that captures both simultaneously. Our analysis hints at a direct linear connection between the two, paving the way for further investigation.
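    As a hedged sketch of the kind of interpolation described above (not the thesis's actual blending algorithm), the code below blends per-expression haemoglobin concentration maps as a convex combination of measured basis maps; the map sizes, values and weights are placeholders.

```python
import numpy as np

def blend_concentration_maps(basis_maps, weights):
    """Blend measured haemoglobin concentration maps.

    basis_maps: dict {expression_name: 2D array of per-pixel concentration}.
    weights: dict {expression_name: non-negative blend weight}.
    Returns the normalised weighted sum (a convex combination) of the maps.
    """
    total = sum(weights.values())
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    blended = None
    for name, w in weights.items():
        contribution = (w / total) * basis_maps[name]
        blended = contribution if blended is None else blended + contribution
    return blended

# Placeholder maps: halfway between a neutral face and a broad smile.
neutral = np.full((256, 256), 0.30)
smile = np.full((256, 256), 0.42)
halfway = blend_concentration_maps({"neutral": neutral, "smile": smile},
                                   {"neutral": 0.5, "smile": 0.5})
```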

    Semi-automated geomorphological mapping applied to landslide hazard analysis

    Computer-assisted three-dimensional (3D) mapping using stereo and multi-image (“softcopy”) photogrammetry is shown to enhance the visual interpretation of geomorphology in steep terrain, with the direct benefit of greater locational accuracy than traditional manual mapping. This would benefit multi-parameter correlations between terrain attributes and landslide distribution in both direct and indirect forms of landslide hazard assessment. Case studies involve synthetic models of a landslide, and field studies of a rock slope and steep undeveloped hillsides with both recently formed and partly degraded, old landslide scars. Diagnostic 3D morphology was generated semi-automatically, both using a terrain-following cursor under stereo-viewing and from high-resolution digital elevation models created using area-based image correlation and further processed with curvature algorithms. Laboratory-based studies quantify the limitations of area-based image correlation for measurement of 3D points on planar surfaces with varying camera orientations. The accuracy of point measurement is shown to be non-linear, with limiting conditions created by both narrow and wide camera angles and moderate obliquity of the target plane. Analysis of the results with the planar surface highlighted problems with the controlling parameters of the area-based image correlation process when used for generating DEMs from images obtained with a low-cost digital camera. Although the specific cause of the phase-wrapped image artefacts identified was not found, the procedure would form a suitable method for testing image correlation software, as these artefacts may not be obvious in DEMs of non-planar surfaces.

    Modelling of synthetic landslides shows that Fast Fourier Transforms are an efficient method for removing noise, such as that produced by errors in measurement of individual DEM points, enabling diagnostic morphological terrain elements to be extracted. Component landforms within landslides are complex entities, and conversion of the automatically defined morphology into geomorphology was only achieved with manual interpretation; however, this interpretation was facilitated by softcopy-driven stereo viewing of the morphological entities across the hillsides.

    In the final case study of a large landslide within a man-made slope, landslide displacements were measured using a photogrammetric model consisting of 79 images captured with a helicopter-borne, hand-held, small-format digital camera. Displacement vectors and a thematic geomorphological map were superimposed over an animated, 3D photo-textured model to aid non-stereo visualisation and communication of results.
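    To make the FFT-based noise-removal step concrete, here is a minimal sketch that low-pass filters a synthetic DEM in the frequency domain: transform the elevation grid, suppress high spatial frequencies, and transform back. The cutoff fraction and the synthetic slope-plus-noise grid are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

def fft_lowpass_dem(dem, cutoff_fraction=0.1):
    """Keep only spatial frequencies below cutoff_fraction of the Nyquist limit."""
    spectrum = np.fft.fft2(dem)
    fy = np.fft.fftfreq(dem.shape[0])[:, None]   # cycles per sample, rows
    fx = np.fft.fftfreq(dem.shape[1])[None, :]   # cycles per sample, columns
    mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff_fraction * 0.5
    return np.real(np.fft.ifft2(spectrum * mask))

# Synthetic example: a smooth tilted surface plus random measurement noise.
rows, cols = np.mgrid[0:128, 0:128]
dem = 0.05 * rows + 0.02 * cols + 0.5 * np.random.randn(128, 128)
smoothed = fft_lowpass_dem(dem, cutoff_fraction=0.1)
```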

    Segmentation of neuroanatomy in magnetic resonance images

    Segmentation in neurological Magnetic Resonance Imaging (MRI) is necessary for volume measurement, feature extraction and the three-dimensional display of neuroanatomy. This thesis proposes several automated and semi-automated methods which offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they give. Work has concentrated on the use of dual-echo multi-slice spin-echo data sets in order to take advantage of the intrinsically multi-parametric nature of MRI. Such data is widely acquired clinically, and segmentation therefore does not require additional scans. The literature has been reviewed. Factors affecting image non-uniformity for a modern 1.5 Tesla imager have been investigated. These investigations demonstrate that a robust, fast, automatic three-dimensional non-uniformity correction may be applied to data as a pre-processing step. The merit of using an anisotropic smoothing method for noisy data has been demonstrated. Several approaches to neurological MRI segmentation have been developed. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge-focusing, two threshold-based techniques and a fast radial CSF identification approach are proposed to identify the intracranial region contour in each slice of the data set. Once isolated, the intracranial region is further processed to identify CSF, and, depending upon the MRI pulse sequence used, the brain itself may be sub-divided into grey matter and white matter using semi-automatic contrast enhancement and clustering methods. The segmentation of Multiple Sclerosis (MS) plaques has also been considered. The utility of the stack, a data-driven multi-resolution approach to segmentation, has been investigated, and several improvements to the method suggested. The factors affecting the intrinsic accuracy of neurological volume measurement in MRI have been studied and their magnitudes determined for spin-echo imaging. Geometric distortion - both object dependent and object independent - has been considered, as well as slice warp, slice profile, slice position and the partial volume effect. Finally, the accuracy of the approaches to segmentation developed in this thesis has been evaluated. Intracranial volume measurements are within 5% of expert observers' measurements, white matter volumes within 10%, and CSF volumes are consistently lower than the expert observers' measurements due to the observers' inability to take the partial volume effect into account.
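    As a hedged illustration of the clustering idea mentioned above (not the thesis's specific method), the sketch below runs a tiny k-means on dual-echo voxel intensities to separate three tissue-like classes; the synthetic intensities and class means are invented.

```python
import numpy as np

def kmeans(features, k, iterations=25, seed=0):
    """Tiny k-means. features: (n, 2) array of [early-echo, late-echo] intensities."""
    rng = np.random.default_rng(seed)
    centres = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iterations):
        # Assign each voxel to its nearest cluster centre.
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centre as the mean of its assigned voxels.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = features[labels == c].mean(axis=0)
    return labels, centres

# Invented dual-echo intensities for three tissue-like classes (e.g. CSF, grey, white).
rng = np.random.default_rng(1)
voxels = np.vstack([
    rng.normal([200.0, 400.0], 20.0, (500, 2)),
    rng.normal([350.0, 300.0], 20.0, (500, 2)),
    rng.normal([450.0, 250.0], 20.0, (500, 2)),
])
labels, centres = kmeans(voxels, k=3)
```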

    2D and 3D computer vision analysis of gaze, gender and age

    Human-Computer Interaction (HCI) has been an active research area for over four decades. Research studies and commercial designs in this area have been largely facilitated by the visual modality, which brings diversified functionality and improved usability to HCI interfaces by employing various computer vision techniques. This thesis explores a number of facial cues, such as gender, age and gaze, by performing 2D and 3D based computer vision analysis. The ultimate aim is to create a natural HCI strategy that can fulfil user expectations, augment user satisfaction and enrich user experience by understanding user characteristics and behaviours. To this end, salient features have been extracted and analysed from 2D and 3D face representations; 3D reconstruction algorithms and their compatible real-world imaging systems have been investigated; and case study HCI systems have been designed to demonstrate the reliability, robustness, and applicability of the proposed method.

    More specifically, an unsupervised approach has been proposed to localise eye centres in images and videos accurately and efficiently. This is achieved by utilising two types of geometric features and eye models, complemented by an iris radius constraint and a selective oriented gradient filter specifically tailored to this modular scheme. This approach resolves challenges such as interfering facial edges, undesirable illumination conditions, head poses, and the presence of facial accessories and makeup. Tested on three publicly available databases (the BioID database, the GI4E database and the Extended Yale Face Database B) and a self-collected database, this method outperforms all the compared methods and thus proves to be highly accurate and robust. Based on this approach, a gaze gesture recognition algorithm has been designed to increase the interactivity of HCI systems by encoding eye saccades into a communication channel similar to the role of hand gestures. As well as analysing eye/gaze data that represent user behaviours and reveal user intentions, this thesis also investigates the automatic recognition of user demographics such as gender and age. The Fisher Vector encoding algorithm is employed to construct visual vocabularies as salient features for gender and age classification. Algorithm evaluations on three publicly available databases (the FERET database, the LFW database and the FRCVv2 database) demonstrate the superior performance of the proposed method in both laboratory and unconstrained environments. In order to achieve enhanced robustness, a two-source photometric stereo method has been introduced to recover surface normals, such that more invariant 3D facial features become available that can further boost classification accuracy and robustness. A 2D+3D imaging system has been designed for the construction of a self-collected dataset including 2D and 3D facial data. Experiments show that utilisation of 3D facial features can increase the gender classification rate by up to 6% (based on the self-collected dataset), and can increase the age classification rate by up to 12% (based on the Photoface database). Finally, two case study HCI systems, a gaze gesture based map browser and a directed advertising billboard, have been designed by adopting all the proposed algorithms as well as the fully compatible imaging system. The proposed algorithms ensure that the case study systems are highly robust to head pose and illumination variation and achieve excellent real-time performance. Overall, the proposed HCI strategy enabled by reliably recognised facial cues can serve to spawn a wide array of innovative systems and to bring HCI to a more natural and intelligent state.
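    As a hedged illustration of how surface normals can be recovered from differently lit images, here is the textbook Lambertian photometric-stereo formulation with three or more known light directions; it is not the two-source method developed in the thesis, and the data layout is my own assumption.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo under a Lambertian model.

    images: list of (h, w) arrays, one per light source.
    light_dirs: (n_lights, 3) array of unit light directions.
    Solves I = L @ (albedo * normal) per pixel by least squares.
    """
    h, w = images[0].shape
    I = np.stack([im.reshape(-1) for im in images], axis=0)    # (n_lights, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)         # (3, h*w) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-9)
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```

    With only two light sources the per-pixel system has two equations and three unknowns, which is why dedicated two-source methods, such as the one developed in the thesis, need additional constraints.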

    G-CSC Report 2010

    The present report gives a short summary of the research of the Goethe Center for Scientific Computing (G-CSC) of the Goethe University Frankfurt. The G-CSC aims at developing and applying methods and tools for modelling and numerical simulation of problems from empirical science and technology. In particular, fast solvers for partial differential equations (PDEs), such as robust, parallel, and adaptive multigrid methods, and numerical methods for stochastic differential equations are developed. These methods are highly advanced and make it possible to solve complex problems. The G-CSC is organised in departments and interdisciplinary research groups. Departments are located directly at the G-CSC, while the task of the interdisciplinary research groups is to bridge disciplines and to bring scientists from different departments together. Currently, the G-CSC consists of the department Simulation and Modelling and the interdisciplinary research group Computational Finance.
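    Since multigrid solvers are highlighted above, here is a toy geometric multigrid V-cycle for the 1D Poisson problem -u'' = f with homogeneous Dirichlet boundaries. It only illustrates the general pattern (smooth, restrict the residual, correct from a coarser grid, smooth again) and is not G-CSC software; the grid size and right-hand side are arbitrary choices.

```python
import numpy as np

def jacobi(u, f, h, steps, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing for (2u_i - u_{i-1} - u_{i+1}) / h^2 = f_i."""
    for _ in range(steps):
        u_new = u.copy()
        u_new[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = u_new
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(fine):
    """Full-weighting restriction onto a grid with half the interior points."""
    coarse = np.zeros((len(fine) - 1) // 2 + 1)
    coarse[1:-1] = 0.25 * fine[1:-3:2] + 0.5 * fine[2:-2:2] + 0.25 * fine[3:-1:2]
    return coarse

def prolong(coarse, n_fine):
    """Linear interpolation back to the fine grid."""
    fine = np.zeros(n_fine)
    fine[2:-2:2] = coarse[1:-1]
    fine[1:-1:2] = 0.5 * (coarse[:-1] + coarse[1:])
    return fine

def v_cycle(u, f, h):
    if len(u) <= 3:                          # coarsest grid: solve the single unknown
        u[1:-1] = 0.5 * h * h * f[1:-1]
        return u
    u = jacobi(u, f, h, steps=3)             # pre-smoothing
    r_coarse = restrict(residual(u, f, h))   # restrict the residual
    e = v_cycle(np.zeros_like(r_coarse), r_coarse, 2 * h)
    u = u + prolong(e, len(u))               # coarse-grid correction
    return jacobi(u, f, h, steps=3)          # post-smoothing

# Arbitrary test problem: exact solution u(x) = sin(pi x) on [0, 1].
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```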