    Proceedings of Workshop on New developments in Space Syntax software


    From the axial line to the walked line: Evaluating the utility of commercial and user-generated street network datasets in space syntax analysis

    Data availability, reliability and cost are some of the most constraining factors in space syntax analysis and its wider commercial acceptance. In recent years user-created Volunteered Geographic Information (VGI), free to all via the Internet, has gained wider acceptance and proven reliability (Haklay, 2010). Furthermore, it has the property of being created by the people who inhabit the spaces being mapped; it therefore captures local knowledge and detail to a far greater degree than commercial mapping agencies. From a space syntax perspective it also relates more closely to the pedestrian network, as it is used on foot and captures details of pedestrian routes through the urban fabric that other road-centric data sources ignore. This paper demonstrates the methodological approaches and analytic outcomes of a space syntax sensitivity analysis of OpenStreetMap (OSM) VGI road network data, the UK national mapping agency Ordnance Survey Integrated Transport Network (ITN) road data and a hand-drawn axial map for four areas within the Greater London Region. The space syntax segment analysis was completed within the Depthmap application. The segment analysis was completed on the ITN model, the OSM model and the hand-drawn model separately, and then on a combined model that integrated all the network elements from both the ITN and OSM. The integration and comparison of the network models was carried out using standard GIS processing techniques and a new algorithm, currently under development at University College London, that identifies and extracts the differences between two line network datasets (Koukoletsos, forthcoming). The space syntax measures were evaluated on four areas in outer London that are the focus of the Adaptable Suburbs project at the Bartlett School of Graduate Studies. The analysis was carried out using length-weighted angular segment and choice analysis at radii 800m, 2000m and n (Turner, 2007). 
Comparative statistics were then generated for the areas to evaluate the analysis outcomes of the different network models. The London-wide network created through the combination of the OSM and ITN networks had a total length of 32,000km, an increase of approximately 20% over the Ordnance Survey ITN network alone. This dramatic increase in network length demonstrates the divergent realities of the two mapping techniques and the representations of the world that they capture. It is anticipated that the sensitivity analysis will find no significant difference in the global syntax values between the ITN, OSM and axial models, but that at the local level the additional network segments for pedestrian routes within the OSM data will provide greater network accuracy and syntax values that model the reality on the ground better than the Ordnance Survey ITN model. Furthermore, the OSM data capture potential pedestrian routes that are not present in the other datasets. The work carried out seeks to understand whether Volunteered Geographic Information is a viable alternative to official mapping sources when creating models for the analysis of small urban areas. If this proves to be the case, such data would provide not only a cost-effective alternative to commercially produced data but indeed a more reliable network model for the analysis to be carried out. Open source geographic data have the capability to improve and enrich space syntax analysis whilst removing the high price barriers that commercial data sources impose.
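The headline comparison above (total network length and its percentage increase) reduces to summing Euclidean segment lengths over each line dataset. A minimal sketch, using invented toy coordinates rather than the actual ITN or OSM data:

```python
import math

def total_length(segments):
    """Sum the Euclidean lengths of (x1, y1, x2, y2) line segments."""
    return sum(math.hypot(x2 - x1, y2 - y1) for x1, y1, x2, y2 in segments)

# Hypothetical toy networks: ITN-style road centrelines plus an extra
# OSM-only pedestrian link (coordinates are illustrative, not real data).
itn = [(0, 0, 100, 0), (100, 0, 100, 100)]
osm_extra = [(0, 0, 0, 50)]   # a footpath absent from the road-centric dataset

combined = itn + osm_extra
increase = (total_length(combined) - total_length(itn)) / total_length(itn)
print(f"Combined network is {increase:.0%} longer than ITN alone")
```

The same arithmetic, applied to the real datasets, yields the roughly 20% length increase reported above.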

    Effects of Data Imputation Methods on Data Missingness in Data Mining

    The purpose of this paper is to study the effectiveness of data imputation methods in dealing with data missingness in the data mining phase of Knowledge Discovery in Databases (KDD). The application of data mining techniques without careful consideration of missing data can result in biased results and skewed conclusions. This research explores the impact of data missingness at various levels in KDD models employing neural networks as the primary data mining algorithm. Four of the most commonly utilized data imputation methods - Case Deletion, Mean Substitution, Regression Imputation, and Multiple Imputation - were evaluated using Root Mean Square (RMS) values, ANOVA testing, t-tests, and Tukey's Honestly Significant Difference test to assess the differences in performance between various Knowledge Discovery and neural network models, both in the presence and absence of missing data.
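Two of the imputation methods named above, Case Deletion and Mean Substitution, are simple enough to sketch directly. The example below, built on an invented toy column rather than the paper's actual data, also computes the Root Mean Square (RMS) error of an imputation against known true values:

```python
import math
import statistics

def case_deletion(rows):
    """Drop any record containing a missing (None) value."""
    return [r for r in rows if None not in r]

def mean_substitution(values):
    """Replace missing (None) entries with the mean of the observed values."""
    mean = statistics.fmean(v for v in values if v is not None)
    return [mean if v is None else v for v in values]

def rms_error(imputed, truth):
    """Root Mean Square difference between imputed and true values."""
    return math.sqrt(statistics.fmean((a - b) ** 2 for a, b in zip(imputed, truth)))

# Toy column with known true values and two entries masked out.
truth = [2.0, 4.0, 6.0, 8.0]
observed = [2.0, None, 6.0, None]

imputed = mean_substitution(observed)   # missing entries become the mean, 4.0
print(rms_error(imputed, truth))        # RMS of the imputation error
```

Regression and Multiple Imputation follow the same evaluate-against-known-truth pattern but fit predictive models to the observed columns instead of substituting a single constant.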

    Areas of Same Cardinal Direction

    Cardinal directions, such as North, East, South, and West, are the foundation for qualitative spatial reasoning, a common field of GIS, Artificial Intelligence, and cognitive science. Such cardinal directions capture the relative spatial direction between a reference object and a target object; therefore, they are important search criteria in spatial databases. The projection-based model for such direction relations has been well investigated for point-like objects, yielding a relation algebra with strong inference power. The Direction Relation Matrix defines simple region-to-region direction relations by approximating the reference object with a minimum bounding rectangle. Models that capture the direction between extended objects fall short when the two objects are close to each other. For instance, the forty-eight contiguous states of the US are colloquially considered to be South of Canada, yet they include regions that are to the North of some parts of Canada. This research considers the cardinal direction as a field that is distributed through space and may take on varying values depending on the location within a reference object. Therefore, the fundamental unit of space, the point, is used as a reference to form a point-based cardinal direction model. The model applies to both point-to-region and region-to-region configurations. As such, the reference object is partitioned into areas of same cardinal direction with respect to the target object. This thesis demonstrates that there is a set of 106 cardinal point-to-region relations, which can be normalized, by considering mirroring and 90° rotations, to a subset of 22 relations. 
The differentiating factor of the model is that a set of base relations defines the direction relation anywhere in the field, and the conceptual neighborhood graph of the base relations offers the opportunity to exploit the strong inference of point-based direction reasoning for simple regions of arbitrary shape. A detailed model considers the tiles and pockets of same cardinal direction, while a coarse model provides a union of all possible qualitative direction values between a reference region and a target region.
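The point-based idea underlying the model can be illustrated with the classic projection-based relation between two points. This sketch shows only the nine point-to-point base relations, not the thesis's 106 point-to-region relations:

```python
def cardinal_direction(ref, target):
    """Projection-based cardinal direction of `target` as seen from `ref`.

    Returns one of the nine point-to-point base relations (eight compass
    directions plus 'same position'); north is taken as the +y axis.
    """
    (rx, ry), (tx, ty) = ref, target
    ns = "N" if ty > ry else "S" if ty < ry else ""
    ew = "E" if tx > rx else "W" if tx < rx else ""
    return (ns + ew) or "same position"

print(cardinal_direction((0, 0), (3, 5)))   # prints NE
```

Evaluating this relation at every point of a reference region, with respect to a target region, is what partitions the reference object into areas of same cardinal direction.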

    The Impact of Data Imputation Methodologies on Knowledge Discovery

    The purpose of this research is to investigate the impact of Data Imputation Methodologies that are employed when a specific Data Mining algorithm is utilized within a KDD (Knowledge Discovery in Databases) process. This study will employ certain Knowledge Discovery processes that are widely accepted in both the academic and commercial worlds. Several Knowledge Discovery models will be developed utilizing secondary data containing known correct values. Tests will be conducted on the secondary data both before and after storing data instances with known results and then identifying imprecise data values. One of the integral stages in the accomplishment of successful Knowledge Discovery is the Data Mining phase. The actual Data Mining process deals significantly with prediction, estimation, classification, pattern recognition and the development of association rules. Neural Networks are the most commonly selected tools for Data Mining classification and prediction. Neural Networks employ various types of Transfer Functions when outputting data; the most commonly employed is the s-Sigmoid Function. Various Knowledge Discovery models from various research and business disciplines were tested using this framework. However, missing and inconsistent data have been a pervasive problem in the history of data analysis since the origin of data collection. Due to advancements in the capacities of data storage and the proliferation of computer software, more historical data are being collected and analyzed today than ever before. The issue of missing data must be addressed, since ignoring this problem can introduce bias into the models being evaluated and lead to inaccurate data mining conclusions. 
The objective of this research is to address the impact of Missing Data and Data Imputation on the Data Mining phase of Knowledge Discovery when Neural Networks employing an s-Sigmoid Transfer Function are utilized and confronted with Missing Data and Data Imputation methodologies.
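The s-Sigmoid transfer function at the heart of these neural network models is straightforward to sketch, and a single illustrative neuron shows how an imputed input value propagates into a different output. The weights and inputs below are invented for the sketch, not taken from the study:

```python
import math

def sigmoid(x):
    """The s-Sigmoid transfer function: maps any real input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# One illustrative neuron; the weights are invented for this sketch.
weights = [0.8, -0.5, 0.3]

def neuron(inputs):
    """Weighted sum of inputs passed through the sigmoid transfer function."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)))

complete = [1.0, 2.0, 0.5]    # record with all values observed
imputed  = [1.0, 1.25, 0.5]   # second value replaced by a hypothetical column mean

print(neuron(complete), neuron(imputed))   # the imputation shifts the output
```

Because the sigmoid is strictly monotonic, any shift in a weighted input caused by imputation shifts the neuron's output, which is why imputation choices can propagate into model-level bias.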


    Street Network Centrality and Built Form Evolution in the Spatial Development of London's Periphery 1880-2013

    This thesis presents a street network and built form analysis of the urbanisation of four peripheral areas of London as they transformed from satellite settlements into parts of the continuous urban fabric of London over 130 years. The analysis is carried out by applying and combining space syntax and GIS techniques to chart the changing structures of network centrality through time, and how these relate to the built form, as the two co-evolved. Through these methods an understanding of the factors that have contributed to the current spatial form of the case studies is developed. In taking an historical view of the urbanisation of the fringes of London, this thesis unpacks the spatial characteristics of areas characterised as ‘suburban’, revealing the specific spatial and architectural forms they have developed. It is shown that peripheral areas cannot be characterised as generically suburban and that great variation exists within this simplistic categorisation. The development of transport infrastructures based around motor vehicles is shown to be reflected in the transformation of built form, at both the household and community level, illustrating the interdependence of technological development, regional planning regimes and everyday life. Large-scale transport infrastructures that operate at a regional level are shown to have local impacts, whilst local changes are shown to have cumulative effects that transform the spatial character of large areas. The analysis of the historical patterns and stages of urbanisation allows new insights into the contemporary city to be developed that are explicitly aware of the role of historical processes in shaping the spaces of the contemporary city and the environments that we experience today. It also enables questions about future adaptability to be approached with a better understanding of the emergence and evolution of peri-urban areas.

    Spatial effects in stated preference studies for environmental valuation

    Bergh, J.C.J.M. van den [Promotor]; Brouwer, R. [Promotor]

    Geographic representation in location intelligence problems analysis: the geo-element mapping chart

    Purpose – The present research has three major aims: to examine the concept of geographic information in business applications through a critical review of the different definitions and conceptualizations that the literature and applied business sectors present from several viewpoints; to identify a logical framework that supports the decomposition of the spatial analysis models used to support business decision making, together with a conceptualization scheme that helps the user/analyst gain insight into the geographic representation inherent in location intelligence applications; and, finally, to apply the proposed framework to some common location intelligence problem statements to evaluate its meaningfulness. Design/methodology/approach – This research critically reviews the existing literature on business applications of Geographic Information Systems; it adopts the Beguin-Thisse framework of geographic space to focus on how the representation of geography is included in the spatial analysis techniques and models used to address location intelligence problems. The proposed logical framework is then applied to some analytical business approaches: trade area analysis models, retail location models, location allocation models, and spatial allocation models. Findings – This research has identified a logical framework, named the geo-element mapping chart (GEMC), to support mapping and making explicit the “geographic dimension” (distance, direction, connectivity, and shape) inside the spatial analysis models used to explore some specific business problems. The general conclusion is that traditional spatial analysis approaches simplify their representation of geography, relying principally on the “classic distance dimension”. 
The GEMC has shown that other dimensions, such as connectivity and shape, can be present in some models, but their practical conceptualization and subsequent implementation in more insightful spatial modelling approaches require multidisciplinary competencies and computational expertise. Research implications/limitations – The idea on which the proposed framework (GEMC) is based is that, for business applications, every spatial analysis model can be decomposed into some elementary model building blocks, which can contain in their definition a “geographic dimension” or represent an element of the geographic space upon which the model conceptually works. The GEMC has been applied only to some case studies; its application therefore needs to be extended to other modelling contexts, such as spatial statistics and spatial econometrics, to provide more general considerations and conclusions. Practical implications – Understanding the use and the value of geography and geographic information in business decision making, i.e. the GEMC's major purpose, can support further developments of specific GIS-based support tools and related spatial analysis techniques. The development of a framework to decompose models and then make evident the representation of the geographic elements and dimensions inherent in a problem can support a more useful management of spatial analytical models, helping a potential user to build new location intelligence models by reusing existing modelling approaches with their “geographical meaning”, and facilitating more intelligent model selection in a complex problem solving environment (such as Knowledge Based Spatial Decision Support Systems and Knowledge Based Planning Support Systems). 
In other words, generalizing the GEMC to other spatial analysis approaches used to model different location intelligence problems could potentially help to build a kind of “library” (model library) of the different approaches used to model the several geographic components, inherent in business problems, that have in the spatial dimension an important variable of their definition and of their effective solution. Originality/value – This research organizes and proposes a framework that integrates the different definitions related to the use of geographic information and Geographic Information Systems in the business sector. It attempts to formalize, and to test in some specific contexts, a logical approach to evaluating the geographic representation in spatial analysis models used to support decision making processes. The GEMC is intended to be a flexible approach to highlighting where geography comes into play during spatial model formulation. The dissertation offers an original applied examination of some issues that have an impact on many aspects of location intelligence applications. By adopting the notion of the GEMC, this research provides a detailed analysis of some methodologies used to model specific spatial business problems. The author is not aware of this logical approach having been applied elsewhere in research or application.
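One illustrative reading of the GEMC is as a lookup table from spatial analysis models to the geographic dimensions their building blocks use. The model names come from the abstract, but the dimension assignments below are hypothetical, for illustration only:

```python
# Hypothetical geo-element mapping chart (GEMC) rendered as a lookup table.
# Model names come from the abstract; the dimension assignments are invented
# for illustration and do not reproduce the dissertation's actual chart.
GEMC = {
    "trade area analysis": {"distance"},
    "retail location":     {"distance", "direction"},
    "location allocation": {"distance", "connectivity"},
    "spatial allocation":  {"distance", "shape"},
}

def models_using(dimension):
    """List the models whose decomposition involves the given geographic dimension."""
    return sorted(name for name, dims in GEMC.items() if dimension in dims)

print(models_using("connectivity"))   # prints ['location allocation']
```

A table of this shape is one way such a "model library" could support model selection: querying by the geographic dimension a problem requires returns the candidate modelling approaches.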