38 research outputs found

    Semantic location extraction from crowdsourced data

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency, and abundance are some of the key reasons for this high interest. Conversely, quality issues such as incompleteness, credibility, and relevancy prevent the direct use of such data in critical applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Where a location is recorded, it mostly relates to the mobile device or user location and often does not represent the event location. In CSD, the event location is described in the text of the comments in addition to the recorded location (which is generated by the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. Tweets and Ushahidi Crowd Map data from the 2011 Queensland floods were semantically analysed to extract location information, supported by the Queensland Gazetteer (converted into an ontological gazetteer) and a global gazetteer. Preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
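
    A minimal sketch of the gazetteer-matching idea follows; the toy gazetteer, its coordinates, and the n-gram matcher are invented for illustration and are not the study's actual implementation (a real ontological gazetteer would also carry feature types and semantic relations between places).

        # Toy sketch of gazetteer-based place-name matching (not the study's code).
        import re

        GAZETTEER = {
            "brisbane": (-27.4698, 153.0251),
            "toowoomba": (-27.5598, 151.9507),
            "lockyer valley": (-27.5500, 152.3500),
        }

        def extract_place_names(text, gazetteer, max_ngram=3):
            """Return (place, coordinates) pairs whose names occur in the text."""
            tokens = re.findall(r"[a-z]+", text.lower())
            matches = []
            for n in range(max_ngram, 0, -1):  # prefer longer names first
                for i in range(len(tokens) - n + 1):
                    candidate = " ".join(tokens[i:i + n])
                    if candidate in gazetteer:
                        matches.append((candidate, gazetteer[candidate]))
            return matches

        print(extract_place_names(
            "Flooding reported near Lockyer Valley, roads cut west of Brisbane",
            GAZETTEER))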

    Urban Informatics

    This open access book is the first to systematically introduce the principles of urban informatics and its application to every aspect of the city that involves its functioning, control, management, and future planning. It introduces new models and tools being developed to understand and implement these technologies that enable cities to function more efficiently – to become ‘smart’ and ‘sustainable’. The smart city has quickly emerged as computers have become ever smaller, to the point where they can be embedded into the very fabric of the city, as well as being central to new ways in which the population can communicate and act. When cities are wired in this way, they have the potential to become sentient and responsive, generating massive streams of ‘big’ data in real time as well as providing immense opportunities for extracting new forms of urban data through crowdsourcing. This book offers a comprehensive review of the methods that form the core of urban informatics, from various kinds of urban remote sensing to new approaches to machine learning and statistical modelling. It provides a detailed technical introduction to the wide array of tools information scientists need to develop the key urban analytics that are fundamental to learning about the smart city, and it outlines ways in which these tools can be used to inform design and policy so that cities can become more efficient, with greater concern for the environment and equity.

    Book of short Abstracts of the 11th International Symposium on Digital Earth

    This booklet is a collection of the accepted short abstracts of the ISDE11 Symposium.

    Estimating Solar Energy Production in Urban Areas for Electric Vehicles

    Cities have a high potential for solar energy from PVs installed on buildings' rooftops, and there is increased demand for solar energy in cities to reduce the negative effects of climate change. This thesis investigates the solar energy potential of urban areas: how to detect and identify available rooftop areas, how to calculate the suitable ones after excluding the effects of shade, and how to estimate the energy generated by PVs. Geographic Information Science (GIS) and Remote Sensing (RS) are used in solar city planning. The goal of this research is to assess available and suitable rooftop areas using different GIS and RS techniques for installing PVs, to estimate solar energy production for a sample of six compounds in New Cairo, and to explore how to map urban areas at the city scale. The study area, New Cairo city, has a high potential for harvesting solar energy; buildings within each compound have the same height, so no building casts shade on another in a way that would affect PV efficiency. When applying GIS and RS techniques in New Cairo city, it was found that environmental factors, such as bare soil, affect the accuracy of the result, which reached 67% at the city scale. Working at finer scales, such as individual compounds, required Very High Resolution (VHR) satellite images with a spatial resolution of up to 0.5 m. The RS techniques applied in this research included supervised classification and feature extraction on Pleiades-1B VHR imagery. At the compound scale, the accuracy assessment for the samples ranged between 74.6% and 96.875%. Estimating the PV energy production requires solar data, which were collected over three years using a weather station and a pyranometer at the American University in Cairo, a site typical of the neighbouring compounds in the New Cairo region. The Hay, Davies, Klucher, and Reindl (HDKR) model is then employed to extrapolate the solar radiation measured on horizontal surfaces (β = 0°) to tilted surfaces with inclination angles β = 10°, 20°, 30° and 45°. The net rooftop area available for capturing solar radiation was calculated (with the help of GIS and solar radiation models) for the sample New Cairo compounds. The available rooftop areas were subject to the restrictions that all the PVs be coplanar, that none protrude outside the rooftop boundaries, and that no shading of PVs occur at any time of the year; moreover, other typically occupied rooftop areas and the actual dimensions of typical rooftop PVs were taken into consideration. From those calculations, both the realistic total annual electrical energy produced by the PVs and their daily energy production per month are deduced; the former is relevant if the PVs are tied to a grid, whereas the latter is more relevant if they are not, and the optimization differs between the two cases. The results were extended to estimate, for different scenarios, the number of cars per home that may be driven off PV-converted solar radiation.
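
    For reference, the HDKR tilt conversion named in this abstract can be sketched compactly. The code below follows the standard textbook formulation of the model (as in Duffie and Beckman); the function, its parameters, and the sample numbers are illustrative assumptions rather than the thesis's code or data.

        # Sketch of the HDKR tilted-surface model in its standard form.
        import math

        def hdkr_tilted_irradiance(G_b, G_d, G_o, R_b, beta_deg, rho_g=0.2):
            """Total irradiance on a surface tilted at beta_deg [W/m^2].

            G_b, G_d, G_o: beam, diffuse, and extraterrestrial irradiance on
            the horizontal [W/m^2]; R_b: beam ratio tilted/horizontal;
            rho_g: ground albedo (0.2 assumed here).
            """
            beta = math.radians(beta_deg)
            G = G_b + G_d                   # global horizontal irradiance
            A_i = G_b / G_o                 # anisotropy index
            f = math.sqrt(G_b / G)          # horizon-brightening factor
            beam_circumsolar = (G_b + G_d * A_i) * R_b
            diffuse = (G_d * (1 - A_i) * (1 + math.cos(beta)) / 2
                       * (1 + f * math.sin(beta / 2) ** 3))
            ground = G * rho_g * (1 - math.cos(beta)) / 2
            return beam_circumsolar + diffuse + ground

        # Purely illustrative midday values at beta = 30 degrees:
        print(hdkr_tilted_irradiance(G_b=600.0, G_d=200.0, G_o=1100.0,
                                     R_b=1.05, beta_deg=30.0))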

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book

    Merging digital surface models sourced from multi-satellite imagery and their consequent application in automating 3D building modelling

    Recently, especially within the last two decades, the demand for DSMs (Digital Surface Models) and 3D city models has increased dramatically. This has arisen from the emergence of new applications beyond construction and analysis, and with them a focus on accuracy and cost. This thesis addresses two linked subjects: first, improving the quality of DSMs by merging DSMs from different sources using a Bayesian approach; and second, extracting building footprints using approaches including Bayesian ones, and producing 3D models. Regarding the first topic, a probabilistic model was built on the Bayesian approach in order to merge DSMs from different sensors. The Bayesian approach is well suited to cases where the data is limited, which can be compensated for by introducing a prior. The implemented prior is based on the hypothesis that building roof outlines are smooth; for that reason, local entropy is used to infer the a priori data. In addition to the prior estimation, the quality of the DSMs is assessed using field checkpoints from differential GNSS. The validation results showed that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. The developed model was also compared with a Maximum Likelihood model, showing similar quantitative statistical results and better qualitative results. It is worth mentioning that, although the DSMs used in the merging were produced from satellite images, the model can be applied to any type of DSM. The second topic is building footprint extraction from satellite imagery. An efficient flow-line for automatic building footprint extraction and 3D model construction, from both stereo panchromatic and multispectral satellite imagery, was developed and applied in an area containing different building types, with both hipped and sloped roofs. The flow-line consists of multiple stages. First, data preparation: digital orthoimagery and DSMs are created from WorldView-1 imagery, and Pleiades imagery is used to create a vegetation mask. The orthoimagery then undergoes binary classification into ‘foreground’ (including buildings, shadows, open water, roads and trees) and ‘background’ (including grass, bare soil, and clay). From the foreground class, shadows and open water are removed using a shadow mask created by thresholding the same orthoimagery. Roads are likewise removed, for the time being, using a mask created interactively from the orthoimagery. NDVI processing of the Pleiades imagery is used to create a mask for removing the trees. An ‘edge map’ is produced using Canny edge detection on the enhanced orthoimagery to define the exact building boundary outlines, and a normalised digital surface model (nDSM) is produced from the original DSM using smoothing and subtracting techniques. Second, building detection and extraction: buildings can be detected, in part, in the nDSM as isolated, relatively elevated ‘blobs’. These nDSM ‘blobs’ are uniquely labelled to identify rudimentary buildings, and each ‘blob’ is paired with its corresponding ‘foreground’ area from the orthoimagery. Each ‘foreground’ area is used as an initial building boundary, which is then vectorised and simplified. Some unnecessary detail in the ‘edge map’, particularly on the roofs of the buildings, is removed using mathematical morphology. Some building edges are not detected in the ‘edge map’ owing to low contrast in parts of the orthoimagery, so the ‘edge map’ is further improved, also using mathematical morphology, leading to the ‘modified edge map’. Finally, a Bayesian approach is used to find the most probable coordinates of the building footprints based on the ‘modified edge map’. The proposed footprint prior is a PDF which assumes that the most probable footprint angle is 90° at a corner and 180° along an edge, with less probable values given to other angles such as 45° and 135°. The 3D model is constructed by extracting the elevation of the buildings from the DSM and combining it with the regularised building boundary. Validation, both quantitative and qualitative, showed that the developed process and associated algorithms successfully extract building footprints and create 3D models.
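
    As one illustration of the pipeline's intermediate steps, the nDSM computation and blob labelling might look as follows; the opening-based terrain approximation, window size, and height threshold below are assumptions for illustration only, not the thesis's parameters.

        # Sketch of the nDSM 'smoothing and subtracting' step plus blob
        # labelling, using scipy.ndimage.
        import numpy as np
        from scipy import ndimage

        def ndsm_blobs(dsm, terrain_window=51, min_height=2.5):
            """Label elevated 'blobs' (candidate buildings) in a DSM."""
            # Approximate the terrain with a grayscale opening: a minimum
            # filter (erosion) followed by a maximum filter (dilation).
            terrain = ndimage.minimum_filter(dsm, size=terrain_window)
            terrain = ndimage.maximum_filter(terrain, size=terrain_window)
            ndsm = dsm - terrain                       # normalised DSM
            elevated = ndsm > min_height               # relatively elevated pixels
            labels, n_blobs = ndimage.label(elevated)  # unique label per blob
            return ndsm, labels, n_blobs

        # Tiny synthetic check: flat terrain with one 8 m 'building'.
        dsm = np.zeros((100, 100))
        dsm[40:60, 40:60] = 8.0
        _, labels, n = ndsm_blobs(dsm)
        print(n)  # -> 1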

    Spatiotemporal enabled Content-based Image Retrieval

    The Analysis of Open Source Software and Data for Establishment of GIS Services Throughout the Network in a Mapping Organization at National or International Level

    Federal agencies and their partners collect and manage large amounts of geospatial data, but these data are often not easily found when needed, and sometimes data are collected or purchased multiple times. In short, the best government data is not always organized and managed efficiently enough to support decision-making in a timely and cost-effective manner. National mapping agencies, and the various departments responsible for collecting different types of geospatial data, cannot for much longer continue to operate as they did a few years ago, like people living on an island. Leaders need to look at what is now possible that was not possible before, considering capabilities such as cloud computing, crowdsourced data collection, available open source remotely sensed data, and multi-source information vital to decision-making, as well as new Web-accessible services that are provided, sometimes at no cost; many of these services could previously be obtained only from local GIS experts. These authorities need to consider the available solutions and gather information about new capabilities, reconsider agency missions and goals, review and revise policies, make budget and human resource decisions, and evaluate new products, cloud services, and cloud service providers. To do so, we need to choose the right tools to reach the above-mentioned goals. As we know, data collection is the most costly part of mapping and of establishing a geographic information system; this is not only because of the cost of the data collection task itself, but also because of the damage caused by delay, and by the time it takes to bring the information needed for decision-making from the field to the user's hand. In fact, the time a project consumes for the collection, processing, and presentation of geospatial information has an even greater effect on the cost of larger projects such as disaster management, construction, city planning, and environmental work, assuming that all the necessary information from existing sources is delivered directly to the user's computer. The best description of a good GIS project optimization or improvement is a methodology that reduces time and cost while increasing data and service quality (meaning accuracy, currency, completeness, consistency, suitability, information content, integrity, integration capability, and fitness for use, as well as the user's specific needs and conditions, which must be addressed with special attention). Each of these issues must be addressed individually, while at the same time the whole solution must be provided in a global manner that considers all the criteria. In this thesis we first discuss the problem we are facing and what needs to be done to establish a National Spatial Data Infrastructure (NSDI): its definition and related components. We then look for available open source software solutions covering the whole process: data collection, database management, data processing, and finally data services and presentation. The first distinction among software packages is whether they are open source and free, or commercial and proprietary. Making this distinction requires a clear specification for the categorization, since it is often difficult from a legal point of view to determine which class a piece of software belongs to; it is therefore necessary to clarify what is meant by the various terms. With reference to this concept there are two global distinctions, and inside each group we distinguish a further classification according to the functionalities and the applications they are built for in GIScience. The outcome of the second chapter is a technical process for selecting suitable and reliable software according to the characteristics of the user's needs and the required components. In Chapter 3, we elaborate on the details of the GeoNode software as our best candidate tool for taking on the responsibilities stated before. In Chapter 4, we discuss the globally available open source data against predefined data quality criteria (such as theme, data content, scale, licensing, and coverage), according to the metadata statements inside the datasets, by means of bibliographic review, technical documentation, and web search engines. In Chapter 5, we discuss further data quality concepts and define a set of protocols for evaluating all the datasets according to the tasks for which a mapping organization is, in general, responsible to its probable users in different disciplines, such as reconnaissance, city planning, topographic mapping, transportation, environmental control, and disaster management. In Chapter 6, all the data quality assessments and protocols are applied to the pre-filtered, proposed datasets. In the final scoring and ranking, each dataset receives a value corresponding to its quality according to the rules defined in the previous chapter. In the last step, a weight vector, derived from questions the user answers with reference to the project at hand, is used to finalize the most appropriate selection of free and open source data: the data quality preference is defined by identifying the weight vector, which is then applied to the quality matrix to obtain the final quality scores and ranking. At the end of this chapter there is a section presenting the use of the datasets in various projects, such as the ‘Early Impact Analysis’ and the ‘Extreme Rainfall Detection System (ERDS), version 2’ performed by ITHACA. Finally, in the conclusion, the important criteria and future trends in GIS software are discussed, and recommendations are presented.
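
    The Chapter 6 scoring step lends itself to a compact sketch: a quality matrix (datasets by criteria) multiplied by a normalised, user-derived weight vector. The criteria, scores, and weights below are invented placeholders, not the thesis's evaluation results.

        # Sketch of weighted data-quality scoring and ranking.
        import numpy as np

        criteria = ["accuracy", "currency", "completeness", "licensing", "coverage"]
        datasets = ["OpenStreetMap", "Natural Earth", "SRTM"]

        # One row per dataset, one score in [0, 1] per criterion.
        quality = np.array([
            [0.8, 0.9, 0.7, 1.0, 0.9],
            [0.6, 0.5, 0.9, 1.0, 1.0],
            [0.9, 0.4, 1.0, 1.0, 0.8],
        ])

        # Weights derived from the user's answers about the project at hand,
        # normalised to sum to 1.
        weights = np.array([3.0, 1.0, 2.0, 2.0, 2.0])
        weights /= weights.sum()

        scores = quality @ weights           # final score per dataset
        for name, score in sorted(zip(datasets, scores), key=lambda p: -p[1]):
            print(f"{name}: {score:.3f}")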