
    Predicting and Visualising City Noise Levels to Support Tinnitus Sufferers

    On a daily basis, urban residents are unconsciously exposed to hazardous noise levels. This has a detrimental effect on the ear-drum, with symptoms often not apparent until later in life. Harmful noise levels also have a damaging impact on wellbeing. It is estimated that 10 million people suffer from damaged hearing in the UK alone, with 6.4 million of retirement age or above. With this number expected to increase significantly by 2031, the demand and cost for healthcare providers is expected to intensify. Tinnitus affects about 10 percent of the UK population, with the condition ranging from mild to severe. The effects can have a psychological impact on the patient. Communication often becomes difficult, and the sufferer may also be unable to use a hearing aid due to buzzing, ringing or monotonous sounds in the ear. Action on Hearing Loss states that sufferers of hearing-related illnesses are more likely to withdraw from social activities. Tinnitus sufferers are known to avoid noisy environments and busy urban areas, as exposure to excessive noise levels exacerbates the symptoms. In this paper, an approach for evaluating and predicting urban noise levels is put forward. The system performs a data classification process to identify and predict harmful noise areas at different periods. The goal is to provide tinnitus sufferers with a real-time tool which can be used as a guide to find quieter routes to work, identify harmful areas to avoid, or predict when noise levels on certain roads will be dangerous to the ear-drum. Our system also performs a visualisation function, which overlays real-time noise levels onto an interactive 3D map. Keywords: Hazardous Noise Levels, Data Classification, Tinnitus, Visualisation, Hearing Loss, Prediction, Real-Time
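    As a rough illustration of the kind of classification step described in this abstract (binning measured sound levels into risk categories per road and time period), consider the sketch below. The dB thresholds, category labels, and the NoiseReading structure are illustrative assumptions and are not taken from the paper.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

# Illustrative risk bands (dB); thresholds are assumptions, not values from the paper.
RISK_BANDS = [(85.0, "harmful"), (70.0, "caution"), (0.0, "safe")]

@dataclass
class NoiseReading:
    road: str        # road or area identifier
    hour: int        # hour of day, 0-23
    level_db: float  # measured sound level

def classify_level(level_db: float) -> str:
    """Map a single dB reading to a risk category."""
    for threshold, label in RISK_BANDS:
        if level_db >= threshold:
            return label
    return "safe"

def predict_risk_by_period(readings: list[NoiseReading]) -> dict[tuple[str, int], str]:
    """Average historical readings per (road, hour) and classify the result,
    giving a simple lookup of when a road is expected to be harmful."""
    grouped: dict[tuple[str, int], list[float]] = defaultdict(list)
    for r in readings:
        grouped[(r.road, r.hour)].append(r.level_db)
    return {key: classify_level(mean(levels)) for key, levels in grouped.items()}

# Example: morning rush hour on a busy road versus a quieter street.
readings = [
    NoiseReading("High Street", 8, 88.0),
    NoiseReading("High Street", 8, 91.5),
    NoiseReading("Park Lane", 8, 62.0),
]
print(predict_risk_by_period(readings))
# {('High Street', 8): 'harmful', ('Park Lane', 8): 'safe'}
```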

    Generating a Novel Scene-Graph for a Modern GIS Rendering Framework

    Within this paper we discuss and present a novel modern 3D Geographical Information System (GIS) framework, Project-Vision-Support (PVS). The framework is capable of processing large amounts of geo-spatial data to procedurally extract, extrapolate, and infer properties to create realistic real-world 3D virtual urban environments. The paper focuses on the generation of a novel scene-graph structure used in a number of algorithms and novel procedures to increase the rendering speed of large virtual scenes, improve processing capability, and ease the manipulation of a world's worth of data. The scene-graph structure, made of two sections, depicts the spatial boundaries of the UK's Ordnance Survey (OS) scheme down to 1 km². Each 1 km² node contains the second section of the scene-graph structure, generated from the OpenStreetMap (OSM) classifications, covering buildings, highways, amenities, boundaries, and terrain. Leaf nodes contain the model mesh data. Generation of the spatial scene-graph for the UK takes 7.99 seconds for 6,313,150 nodes. The scene-graph structure allows for fast dispersal of render states, as well as scene manipulation, by pre-categorising the data into branches of the scene-graph structure. Searching for a node by name is evaluated using depth-first search and breadth-first search, giving 0.000186 and 0.036914 seconds respectively within a scene-graph of 3,257 nodes.
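    To make the two-tier structure concrete, the sketch below builds a root of 1 km² Ordnance Survey tile nodes, each holding OSM category branches (buildings, highways, amenities, boundaries, terrain) with mesh data at the leaves, and implements the depth-first and breadth-first searches by name that the evaluation refers to. The class names, fields, and grid reference are illustrative assumptions rather than the PVS API.

```python
from collections import deque
from typing import Optional

class SceneNode:
    """Generic scene-graph node; leaf nodes carry model mesh data."""
    def __init__(self, name: str, mesh=None):
        self.name = name
        self.mesh = mesh          # leaf-level model mesh data (placeholder)
        self.children: list["SceneNode"] = []

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child

# OSM classification branches held under every 1 km2 OS tile node.
OSM_CATEGORIES = ["buildings", "highways", "amenities", "boundaries", "terrain"]

def build_tile(root: SceneNode, os_ref: str) -> SceneNode:
    """First section: a 1 km2 Ordnance Survey tile node;
    second section: its OSM category branches."""
    tile = root.add(SceneNode(os_ref))
    for category in OSM_CATEGORIES:
        tile.add(SceneNode(f"{os_ref}/{category}"))
    return tile

def dfs_find(node: SceneNode, name: str) -> Optional[SceneNode]:
    """Depth-first search for a node by name."""
    if node.name == name:
        return node
    for child in node.children:
        found = dfs_find(child, name)
        if found:
            return found
    return None

def bfs_find(root: SceneNode, name: str) -> Optional[SceneNode]:
    """Breadth-first search for a node by name."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node.name == name:
            return node
        queue.extend(node.children)
    return None

# Example: one tile (grid reference is illustrative) with a single building leaf.
root = SceneNode("UK")
build_tile(root, "SJ3490")
dfs_find(root, "SJ3490/buildings").add(SceneNode("building_001", mesh="<mesh>"))
print(bfs_find(root, "building_001").name)   # building_001
```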

    Contributions to Big Geospatial Data Rendering and Visualisations

    Current geographical information systems lack features and components which are commonly found within rendering and game engines. When combined with computer game technologies, a modern geographical information system capable of advanced rendering and data visualisation becomes achievable. We have investigated the combination of big geospatial data and computer game engines to create a modern geographical information system framework capable of visualising densely populated real-world scenes using advanced rendering algorithms.

    The pipeline imports raw geospatial data in the form of Ordnance Survey data provided by the UK government, LiDAR data provided by a private company, and the global open mapping project OpenStreetMap. The data is combined to produce additional terrain data where data is missing from the high-resolution LiDAR sources by utilising interpolated Ordnance Survey data; wherever data is missing from the LiDAR, the same interpolation techniques are also utilised. Once a high-resolution terrain data set that is complete in terms of coverage has been generated, sub-datasets can be extracted from the LiDAR using OSM boundary data as a perimeter. The boundaries of OSM represent buildings or assets, so data such as building heights can then be extracted and used to update the OSM database. Using a novel adjacency matrix extraction technique, 3D model mesh objects can be generated using both LiDAR and OSM information. The generation of model mesh objects created from OSM data utilises procedural content generation techniques, enabling the generation of GIS-based 3D real-world scenes. Although LiDAR and Ordnance Survey data are only available for the UK, restricting that part of the generation to the UK borders, using OSM alone the system is able to procedurally generate any place in the world covered by OSM.

    In this research, to manage the large amounts of data, a novel scenegraph structure has been generated to spatially separate OSM data according to OS coordinates, splitting the UK into 1 km² tiles and categorising OSM assets such as buildings, highways, and amenities. Once the data is spatially organised and categorised by importance, the novel scenegraph allows for data dispersal through an entire scene in real time.

    The 3D real-world scenes visualised within the runtime simulator can be manipulated in four main aspects:
    • Viewing at any angle or location through the use of a 3D and 2D camera system.
    • Modifying the effects or effect parameters applied to the 3D model mesh objects to visualise user-defined data, by use of our novel algorithms and unique lighting data-structure effect file with accompanying material interface.
    • Procedurally generating animations which can be applied to the spatial parameters of objects, or to their visual properties.
    • Applying the Indexed Array Shader Function and taking advantage of the novel big geospatial scenegraph structure to exploit better rendering techniques in the context of a modern Geographical Information System, which, to the best of our knowledge, has not been done before.

    Combined with the novel scenegraph structure layout, the user can view and manipulate real-world procedurally generated worlds, with additional user-generated content, in a number of ways unavailable in current geographical information system implementations.
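    The terrain gap-filling step described above, in which holes in the high-resolution LiDAR height grid are filled using interpolated Ordnance Survey terrain data, might look roughly like the following sketch. The grid resolutions, array layout, and choice of bilinear interpolation are assumptions made for illustration; the thesis does not prescribe this exact scheme here.

```python
import numpy as np

def bilinear_sample(grid: np.ndarray, r: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Bilinearly interpolate `grid` at fractional row/column coordinates."""
    r0 = np.clip(np.floor(r).astype(int), 0, grid.shape[0] - 2)
    c0 = np.clip(np.floor(c).astype(int), 0, grid.shape[1] - 2)
    dr, dc = r - r0, c - c0
    top = grid[r0, c0] * (1 - dc) + grid[r0, c0 + 1] * dc
    bottom = grid[r0 + 1, c0] * (1 - dc) + grid[r0 + 1, c0 + 1] * dc
    return top * (1 - dr) + bottom * dr

def fill_lidar_gaps(lidar: np.ndarray, os_terrain: np.ndarray,
                    lidar_res: float = 1.0, os_res: float = 50.0) -> np.ndarray:
    """Fill NaN cells in a fine LiDAR height grid with heights interpolated
    from a coarser Ordnance Survey terrain grid covering the same extent.
    Resolutions (metres per cell) are illustrative assumptions."""
    filled = lidar.copy()
    rows, cols = np.nonzero(np.isnan(filled))
    scale = lidar_res / os_res                      # fine cells per coarse cell
    filled[rows, cols] = bilinear_sample(os_terrain, rows * scale, cols * scale)
    return filled

# Example: a 100 m x 100 m LiDAR tile at 1 m resolution with a missing patch,
# backed by a coarse 3 x 3 OS terrain grid (illustrative values).
lidar = np.full((100, 100), 12.0)
lidar[40:60, 40:60] = np.nan                        # area with no LiDAR return
os_terrain = np.array([[10.0, 11.0, 12.0],
                       [11.0, 12.0, 13.0],
                       [12.0, 13.0, 14.0]])
complete = fill_lidar_gaps(lidar, os_terrain)
assert not np.isnan(complete).any()
```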
    We evaluate multiple functionalities and aspects of the framework. We evaluate the performance of the system, measuring frame rates on maps of various sizes by means of stress testing, as well as evaluating the benefits of the novel scenegraph structure for categorising, separating, manoeuvring, and dispersing data. Scenegraph nodes containing no model mesh data, procedurally generated model data, and user-generated model data were uniformly scaled by n²; the experiment compared runtime parameters and memory consumption. We have compared the technical features of the framework against those of related real-world commercial projects: Google Maps, OSM2World, OSM-3D, OSM-Buildings, OpenStreetMap, ArcGIS, Sustainability Assessment Visualisation and Enhancement (SAVE), and Autonomous Learning Agents for Decentralised Data and Information (ALLADIN). We conclude that, when compared to related research, the framework produces data-sets relevant for visualising geospatial assets from the combination of real-world data-sets, capable of being used by a multitude of external game engines, applications, and geographical information systems. The ability to manipulate the production of these data-sets at pre-compile time, provided by the pre-processor, aids processing speeds for runtime simulation. The added benefit is that users can manipulate the spatial and visual parameters in a number of ways with minimal domain knowledge. The ability to create procedural animations attached to each of the spatial and visual shading parameters allows users to view and encode their own representations of scenes, which is unavailable within all of the products stated. Each of the alternative projects has similar features, but none allows full animation of all parameters of an asset, whether spatially, visually, or both.

    We also evaluated the framework on its implemented features, implementing the needed algorithms and novelties as problems arose during development. An example of this is the algorithm for combining the multiple terrain data-sets we have (Ordnance Survey terrain data, and Light Detection and Ranging Digital Surface Model and Digital Terrain Model data) in a justifiable way to produce maps with no missing data values for further analysis and visualisation. A majority of visualisations are rendered using an Indexed Array Shader Function effect file, structured as a novel design to encapsulate common rendering effects found in commercial computer games and apply them to the rendering of real-world assets for a modern geographical information system. Maps varying in dimensions, polygonal density, asset count, and memory consumption prove successful in relation to real-time rendering parameters, i.e. the visualisation of maps does not create a bottleneck for processing. The visualised scenes allow users to view large, dense environments which include terrain models together with procedural and user-generated buildings, highways, amenities, and boundaries. The use of a novel scenegraph structure allows for fast iteration and search from user-defined dynamic queries. Interaction with the framework is provided through a novel Interactive Visualisation Interface; utilising the interface, a user can apply procedurally generated animations to both the spatial and visual properties of any node or model mesh within the scene.
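    As a loose illustration of attaching a procedurally generated animation to a node's spatial parameters, the sketch below drives a node's scale from a generated keyframe curve. The AnimatedNode class, the sine-based curve, and the frame loop are hypothetical stand-ins, not the framework's Interactive Visualisation Interface.

```python
import math

class AnimatedNode:
    """Minimal stand-in for a scenegraph node with spatial parameters."""
    def __init__(self, name: str):
        self.name = name
        self.position = [0.0, 0.0, 0.0]   # x, y, z
        self.scale = 1.0

def generate_scale_keyframes(duration_s: float, fps: int = 30,
                             amplitude: float = 0.25) -> list[float]:
    """Procedurally generate a per-frame scale curve (a gentle looping pulse)."""
    frames = int(duration_s * fps)
    return [1.0 + amplitude * math.sin(2 * math.pi * i / frames)
            for i in range(frames)]

def apply_animation(node: AnimatedNode, keyframes: list[float], frame: int) -> None:
    """Apply the keyframe for the current frame to the node's scale parameter."""
    node.scale = keyframes[frame % len(keyframes)]

# Example: animate a building node over a two-second loop.
node = AnimatedNode("building_001")
keys = generate_scale_keyframes(duration_s=2.0)
for frame in range(5):                     # a few frames of the loop
    apply_animation(node, keys, frame)
    print(f"frame {frame}: scale {node.scale:.3f}")
```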
    We conclude that the framework has been a success. We have completed what we set out to develop and create: we have combined multiple data-sets to create improved terrain data-sets for further research and development, and we have created a framework which combines the real-world data of Ordnance Survey, LiDAR, and OpenStreetMap, implementing algorithms to create procedural assets of buildings, highways, terrain, amenities, model meshes, and boundaries for visualisation, with features which allow users to search and manipulate a city's worth of data on a per-object basis or in user-defined combinations. The framework has been built with the cross-domain specialism needed for such a project, combining the areas of computer games technology, engine and framework development, procedural generation techniques and algorithms, the use of real-world data-sets, geographical information system development, data parsing, big-data algorithmic reduction techniques, and visualisation using shader techniques.