
    Controlling Complexity in Spatial Modelling

    The present complexity approach is based on two assumptions: A1, measurability of deviations of outcomes with respect to reference values; A2, extension of A1 to multi-set analysis. Complexity is then defined in terms of multi-set deviations compared to single-set ones; an interpretation is given in terms of information costs, and examples show the relevance of the interpretation. As a useful by-product, the explicit solution of the quadratic part of the discrete logistic (one of the examples) is derived; a set of p_ij-numbers is introduced, and a workable method for generating them is exposed. Extensions are considered, in particular controllability. A further application is then proposed, namely to hypergraph conflict analysis, in particular conflict resolution. Many decisional conflicts at the spatial level can be axiomatised in this form; it is shown how the use of particular structures (in the mathematical sense of that word) of the problem greatly reduces the degree of complexity of the problem, and hence the difficulty of finding a solution.
    Keywords: chaos, complexity, conflict, dynamics, hypergraphs, information
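    The abstract does not reproduce the derivation; for orientation, a minimal sketch of the discrete logistic map it cites as an example is given below. The parameter values (x0, r, step count) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the discrete logistic map mentioned above as an example:
# x_{n+1} = r * x_n * (1 - x_n). The values of r and x0 are illustrative
# assumptions, not data from the paper.

def logistic_orbit(x0, r, n_steps):
    """Iterate the discrete logistic map and return the orbit as a list."""
    orbit = [x0]
    x = x0
    for _ in range(n_steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

if __name__ == "__main__":
    # r = 4.0 is the fully chaotic regime often used in complexity examples.
    print(logistic_orbit(x0=0.2, r=4.0, n_steps=10))
```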

    A Pairwise Comparison Matrix Framework for Large-Scale Decision Making

    A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems: (1) the curse of dimensionality, that is, a large number of pairwise comparisons must be elicited from a decision maker (DM), and (2) inconsistent and (3) imprecise preferences may be obtained due to the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions that addresses these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this is done to derive the global weights of the elements of the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is therefore subject to biases and judgement errors. The second phase proposes a trade-off PCM decomposition methodology that decomposes a PCM into a number of optimally identified subsets. A BIP is proposed to balance (1) the time savings from reducing pairwise comparisons and the level of PCM inconsistency against (2) the accuracy of the weights. The proposed methodology is applied to the AHP to demonstrate its advantages and is compared to established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A Non-Linear Programming model is then developed that calculates PCM element weights which simultaneously maximize satisfaction of the DM's preferences and minimize inconsistency. Comparison experiments are conducted using datasets collected from the literature to validate the proposed methodology.
    Dissertation/Thesis, Ph.D. Industrial Engineering, 201
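    The dissertation's decomposition models are not reproduced in the abstract; as background, a minimal sketch of the standard AHP machinery it builds on (priority weights from a PCM via the principal eigenvector, plus Saaty's consistency ratio) is shown below. The 3x3 example matrix is illustrative, and the random-index table holds standard AHP values, not data from the dissertation.

```python
import numpy as np

# Minimal sketch: derive priority weights from a pairwise comparison matrix
# (PCM) via the principal eigenvector and compute Saaty's consistency ratio.
# The example matrix is illustrative, not taken from the dissertation.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # random indices

def pcm_weights(A):
    """Principal right eigenvector of A, normalized to sum to 1."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum(), eigvals[k].real

def consistency_ratio(A):
    """CR = CI / RI; values below roughly 0.1 are conventionally acceptable."""
    n = A.shape[0]
    _, lam_max = pcm_weights(A)
    ci = (lam_max - n) / (n - 1)   # consistency index
    return ci / RI[n]

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, _ = pcm_weights(A)
print("weights:", w, "CR:", consistency_ratio(A))
```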

    BigFCM: Fast, Precise and Scalable FCM on Hadoop

    Clustering plays an important role in mining big data, both as a modeling technique and as a preprocessing step in many data mining pipelines. Fuzzy clustering provides more flexibility than non-fuzzy methods by allowing each data record to belong to more than one cluster to some degree. However, a serious challenge in fuzzy clustering is the lack of scalability: massive datasets in emerging fields such as geosciences, biology, and networking require parallel and distributed computation with high performance to solve real-world problems. Although some clustering methods have already been adapted to execute on big data platforms, their execution time increases sharply for large datasets. In this paper, a scalable Fuzzy C-Means (FCM) clustering method named BigFCM is proposed and designed for the Hadoop distributed data platform. Based on the map-reduce programming model, it exploits several mechanisms, including an efficient caching design, to achieve several orders of magnitude reduction in execution time. Extensive evaluation over multi-gigabyte datasets shows that BigFCM is scalable while preserving the quality of clustering.
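    BigFCM's MapReduce and caching machinery is not described in the abstract; for reference, a minimal single-machine sketch of the standard FCM updates that BigFCM scales up is given below. The fuzzifier m, the tolerance, and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Minimal single-machine sketch of standard Fuzzy C-Means (the algorithm
# BigFCM scales up on Hadoop); the paper's MapReduce/caching mechanisms are
# not reproduced. m (fuzzifier), tol, and the data are illustrative.

def fcm(X, c, m=2.0, tol=1e-5, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)        # fuzzy memberships, rows sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))           # inverse-distance weights
        U_new /= U_new.sum(axis=1, keepdims=True)        # membership update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Two well-separated Gaussian blobs as toy input.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```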

    A bi-level model of dynamic traffic signal control with continuum approximation

    This paper proposes a bi-level model for traffic network signal control, formulated as a dynamic Stackelberg game and solved as a mathematical program with equilibrium constraints (MPEC). The lower-level problem is a dynamic user equilibrium (DUE) with an embedded dynamic network loading (DNL) sub-problem based on the LWR model (Lighthill and Whitham, 1955; Richards, 1956). The upper-level decision variables are (time-varying) signal green splits, with the objective of minimizing network-wide travel cost. Unlike most of the existing literature, which mainly uses an on-and-off (binary) representation of the signal controls, we employ a continuum signal model recently proposed and analyzed in Han et al. (2014), which aims at describing and predicting the aggregate behavior at signalized intersections without relying on distinct signal phases. Advantages of this continuum signal model include fewer integer variables, less restrictive constraints on the time steps, and higher decision resolution. It simplifies the modeling of large-scale urban traffic networks with the benefit of improved computational efficiency in simulation and optimization. For the LWR-based DNL model that explicitly captures vehicle spillback, we present an in-depth study of the implementation of the continuum signal model, as its approximation accuracy depends on a number of factors and may deteriorate greatly under certain conditions. The proposed MPEC is solved on two test networks with three metaheuristic methods; parallel computing is employed to significantly accelerate the solution procedure.
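    The formulation of Han et al. (2014) is not reproduced in the abstract; the sketch below only illustrates the basic idea of a continuum signal in a standard cell-transmission (Godunov) discretization of the LWR model: a smooth green split g(t) in [0, 1] scales the discharge capacity at the stop line instead of switching flow on and off. All parameter values and the triangular fundamental diagram are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative cell-transmission (Godunov) step for LWR on one link whose
# last cell feeds a signal. A continuum green split g in [0, 1] scales the
# stop-line discharge capacity rather than gating it on/off. Parameters
# (vf, w, rho_jam, qmax, dt, dx, inflow) are assumptions for illustration.

def step(rho, g, vf=1.0, w=0.5, rho_jam=1.0, qmax=0.25, dt=0.5, dx=1.0):
    """Advance cell densities rho one time step; CFL: vf * dt / dx <= 1."""
    demand = np.minimum(vf * rho, qmax)              # sending flow of each cell
    supply = np.minimum(w * (rho_jam - rho), qmax)   # receiving flow of each cell
    flow = np.minimum(demand[:-1], supply[1:])       # interior cell boundaries
    inflow = min(0.2, supply[0])                     # constant upstream demand
    outflow = g * demand[-1]                         # green split scales capacity
    rho = rho.copy()
    rho[0]    += dt / dx * (inflow - flow[0])
    rho[1:-1] += dt / dx * (flow[:-1] - flow[1:])
    rho[-1]   += dt / dx * (flow[-1] - outflow)
    return rho

rho = np.full(10, 0.1)
for t in range(100):
    g = 0.5 + 0.4 * np.sin(0.1 * t)   # smooth time-varying green split in [0.1, 0.9]
    rho = step(rho, g)
print(rho)
```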