    Typology based on three density variables central to Spacematrix using cluster analysis

    Since the publication of the book ‘Spacematrix. Space, density and urban form’ (Berghauser Pont and Haupt, 2010), the Spacematrix method has been linked back to its theoretical foundations by Steadman (2013) and further developed using the measure of accessible density, which relates more closely to the environment as experienced by people moving through the city (Berghauser Pont and Marcus, 2014) and was then used to arrive at a multi-scalar density typology (Berghauser Pont et al., 2017). This paper takes a further step in the development of the Spacematrix method by including the measure of network density in the classification, a measure which until now has not been used to its full potential. Crucial for a successful classification is the ability to ascertain the fundamental characteristics on which it is to be based. Following Berghauser Pont and Haupt (2010), three key variables are addressed: Floor Space Index (FSI), Ground Space Index (GSI) and Network density (N), of which the last in particular was not fully included in the earlier work. Besides a typology based on these three variables, the paper also delivers a robust statistical method that can later be applied to larger samples for city-scale comparisons. Two statistical methods are tested: hierarchical clustering and centroid-based clustering. Besides a general discussion of their strengths and weaknesses, the paper shows that the hierarchical method is more convincing in distinguishing differences in both building type and street pattern, the latter being especially well captured by Network density (N). As the hierarchical method is not suitable for large datasets, a combination of the two clustering methods is proposed as the way forward.

    Berghauser Pont, M.; Olsson, J. (2018). Typology based on three density variables central to Spacematrix using cluster analysis. In: 24th ISUF International Conference, Book of Papers. Editorial Universitat Politècnica de València, 1337-1348. https://doi.org/10.4995/ISUF2017.2017.5319
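    The two clustering approaches compared in the paper can be sketched as follows. This is a minimal illustration on synthetic (FSI, GSI, N) observations, not the authors' dataset or parameterisation: the three density regimes and their values are invented for demonstration.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Three hypothetical density regimes (columns: FSI, GSI, N).
    X = np.vstack([
        rng.normal([0.4, 0.2, 0.010], 0.03, size=(20, 3)),  # low-rise, open
        rng.normal([1.2, 0.4, 0.020], 0.05, size=(20, 3)),  # mid-rise
        rng.normal([2.5, 0.6, 0.015], 0.05, size=(20, 3)),  # high-rise, compact
    ])

    # Hierarchical (Ward) clustering: finer type distinctions on small samples.
    hier_labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

    # Centroid-based clustering (k-means): scales to city-sized samples.
    km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    print(sorted(np.bincount(hier_labels)[1:].tolist()))  # hierarchical cluster sizes
    print(sorted(np.bincount(km_labels).tolist()))        # k-means cluster sizes
    ```

    A combined workflow along the lines the paper proposes would run the hierarchical method on a subsample to establish the typology, then assign the full dataset with the centroid-based method.
    
    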

    Network Representation Learning Guided by Partial Community Structure

    Network Representation Learning (NRL) is an effective way to analyze large-scale networks (graphs). In general, it maps network nodes, edges, subgraphs, etc. onto independent vectors in a low-dimensional space, thus facilitating network analysis tasks. As community structure is one of the most prominent mesoscopic structural properties of real networks, it is necessary to preserve the community structure of networks during NRL. In this paper, the concept of k-step partial community structure is defined and two Partial Community structure Guided Network Embedding (PCGNE) methods for node representation learning are proposed, based on two popular NRL algorithms (DeepWalk and node2vec respectively). The idea behind this is that it is easier and more cost-effective to find a high-quality 1-step partial community structure than a high-quality whole community structure; the extracted partial community information is then used to guide the random walks in DeepWalk or node2vec. As a result, the learned node representations preserve the community structure of networks more effectively. The two proposed algorithms and six state-of-the-art NRL algorithms were examined through multi-label classification and (inner-community) link prediction on eight synthesized networks, in which the community structure could be controlled, and one real-world network. The results suggest that the two PCGNE methods improve the performance of their respective base algorithms significantly and are competitive for node representation learning. In particular, compared against the baseline algorithms, the PCGNE methods capture overlapping community structure much better, and thus achieve better performance for multi-label classification on networks that have more overlapping nodes and/or larger overlapping memberships.
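    The guiding idea can be sketched as a random walk that prefers same-community neighbours when partial community labels are available. This is a simplified illustration of the concept, not the paper's algorithm: the function name, bias parameter `alpha`, and the toy graph are all invented for demonstration.

    ```python
    import random

    def community_guided_walk(adj, community, start, length, alpha=0.8, seed=0):
        """DeepWalk-style walk that, with probability alpha, steps to a
        same-community neighbour when one exists; otherwise steps uniformly.
        Nodes without a community label (partial structure) step uniformly."""
        rng = random.Random(seed)
        walk = [start]
        for _ in range(length - 1):
            cur = walk[-1]
            nbrs = adj[cur]
            if not nbrs:
                break
            same = [n for n in nbrs
                    if community.get(cur) is not None
                    and community.get(n) == community.get(cur)]
            if same and rng.random() < alpha:
                walk.append(rng.choice(same))
            else:
                walk.append(rng.choice(nbrs))
        return walk

    # Two triangles joined by a bridge; node 5's community is unknown (partial).
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
    community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B"}
    walk = community_guided_walk(adj, community, start=0, length=10)
    print(walk)
    ```

    The resulting walks over-sample intra-community context, so the downstream skip-gram embedding (as in DeepWalk/node2vec) places same-community nodes closer together.
    
    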

    Evolving Large-Scale Data Stream Analytics based on Scalable PANFIS

    Many distributed machine learning frameworks have recently been built to speed up large-scale data learning. However, most distributed machine learning used in these frameworks still relies on offline algorithms, which cannot cope with data stream problems. In fact, large-scale data are mostly generated by non-stationary data streams whose patterns evolve over time. To address this problem, we propose a novel Evolving Large-scale Data Stream Analytics framework based on a Scalable Parsimonious Network based on Fuzzy Inference System (Scalable PANFIS), where the PANFIS evolving algorithm is distributed over worker nodes in the cloud to learn the large-scale data stream. The Scalable PANFIS framework incorporates an active learning (AL) strategy and two model fusion methods. The AL strategy accelerates the distributed learning process to generate an initial evolving large-scale data stream model (initial model), whereas the two model fusion methods aggregate the initial model to generate the final model. The final model represents the current state of large-scale data knowledge and can be used to infer future data. Extensive experiments validate the framework by measuring the accuracy and running time of four combinations of Scalable PANFIS and other Spark-based built-in algorithms. The results indicate that Scalable PANFIS with AL trains almost twice as fast as Scalable PANFIS without AL. The results also show that the rule merging and voting mechanisms yield similar accuracy in general among the Scalable PANFIS algorithms, and that both are generally more accurate than the Spark-based algorithms. In terms of running time, Scalable PANFIS outperforms all Spark-based algorithms when classifying numerous benchmark datasets. Comment: 20 pages, 5 figures
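    The voting-based fusion step can be sketched as each worker's locally learned model casting a vote on a sample's class, with the majority winning. This is a minimal stand-in: the real framework fuses evolving fuzzy inference systems trained on Spark partitions, whereas the "models" below are trivial threshold classifiers invented for illustration.

    ```python
    from collections import Counter

    def majority_vote(models, x):
        """Aggregate class predictions from local models learned on
        separate data partitions; ties resolve to the first-seen class."""
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]

    # Three hypothetical local classifiers thresholding one feature differently,
    # as if each had been fitted on a different partition of the stream.
    models = [
        lambda x: int(x > 0.4),
        lambda x: int(x > 0.5),
        lambda x: int(x > 0.6),
    ]
    print(majority_vote(models, 0.55))  # two of three models vote 1
    print(majority_vote(models, 0.45))  # two of three models vote 0
    ```

    The rule-merging alternative mentioned in the abstract would instead combine the workers' fuzzy rules into a single rule base rather than keeping all models and voting at prediction time.
    
    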

    Multiscale Discriminant Saliency for Visual Attention

    Bottom-up saliency, an early stage of human visual attention, can be considered a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the two class distributions. The estimated discrepancy between the two feature classes depends strongly on the scale levels considered; multi-scale structure and discriminant power are therefore integrated by employing discrete wavelet features and a Hidden Markov Tree (HMT). From the wavelet coefficients and Hidden Markov Tree parameters, quad-tree-like label structures are constructed and used in maximum a posteriori (MAP) estimation of the hidden class variables at the corresponding dyadic sub-squares. The saliency value for each dyadic square at each scale level is then computed from the discriminant power principle and the MAP estimate. Finally, the final saliency map is integrated across multiple scales by an information maximization rule. Both standard quantitative tools such as NSS, LCC and AUC and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on the Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS as well as to point out its disadvantages as directions for further research. Comment: 16 pages, ICCSA 2013 - BIOCA session
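    The discriminant-power idea can be sketched as scoring a feature by the mutual information between its (binned) values and the binary center/surround label. This is an illustrative histogram-based estimator, not the paper's wavelet/HMT pipeline; the data, binning, and function name are invented for demonstration.

    ```python
    import numpy as np

    def mutual_information(feature, labels, bins=8):
        """Estimate I(F; C) from a joint histogram of binned feature
        values and binary class labels, in bits."""
        edges = np.histogram_bin_edges(feature, bins)
        f_bins = np.digitize(feature, edges[1:-1])  # bin index 0..bins-1
        joint = np.zeros((bins, 2))
        for fb, c in zip(f_bins, labels):
            joint[fb, c] += 1
        joint /= joint.sum()
        pf = joint.sum(axis=1, keepdims=True)  # marginal over feature bins
        pc = joint.sum(axis=0, keepdims=True)  # marginal over classes
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = joint * np.log2(joint / (pf * pc))
        return np.nansum(terms)

    rng = np.random.default_rng(0)
    labels = np.repeat([0, 1], 500)  # 0 = surround, 1 = center
    # A feature whose distribution shifts between classes vs. one that does not.
    discriminative = np.concatenate([rng.normal(0, 1, 500), rng.normal(3, 1, 500)])
    uninformative = rng.normal(0, 1, 1000)
    print(mutual_information(discriminative, labels) >
          mutual_information(uninformative, labels))
    ```

    In the paper's setting this score would be computed per wavelet sub-band and scale, with the HMT tying the class labels together across the quad-tree of dyadic squares.
    
    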