
    Decentralized or centralized production: impacts to the environment, industry, and the economy

    Since product take-back is mandated in Europe, with effects for producers worldwide including in the U.S., designing efficient forward and reverse supply chain networks is becoming essential for business viability. Centralizing production facilities may reduce costs but perhaps not environmental impacts. Decentralizing a supply chain may reduce transportation-related environmental impacts but increase capital costs. Facility location strategies of centralization or decentralization are tested for companies whose supply chains both take back and manufacture products. Decentralized and centralized production systems have different effects on the environment, industry, and the economy. Decentralized production systems cluster suppliers within the geographical market region that the system serves; centralized production systems rely on many widely spread suppliers to meet all market demand. The aim of this research is to improve company decision-makers' understanding of the environmental impacts and costs of choosing a decentralized or centralized supply chain strategy. It explores what degree of centralization makes the most financial and environmental sense when siting facilities, and which factory locations best handle the financial and environmental impacts of the processing steps needed to manufacture a product. Two examples of facility location for supply chains with product take-back are considered: a theoretical case involving shoe resoling, and a real-world case study of the location of operations for a company that reclaims multiple products for use as material inputs. For the theoretical example a centralized facility location strategy was optimal, whereas for the case study a decentralized strategy was best. In conclusion, it is not possible to say that either a centralized or a decentralized facility location strategy is best in general for a company that takes back products. Each company's specific concerns, needs, and supply chain details will determine which degree of centralization creates the optimal strategy for siting its facilities.
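
    To make the trade-off concrete, the toy comparison below scores a centralized and a decentralized configuration on total cost and transport emissions. This is only an illustrative sketch, not the thesis's model: the Option structure and all the figures (demand, transport work, cost rates, emission factors) are invented for the example.

```python
# Illustrative sketch (not the thesis model): comparing a centralized and a
# decentralized facility-location option on total cost and transport emissions.
# All figures below are made up for the purpose of the example.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    fixed_cost: float          # annual capital/overhead cost of the facilities
    ton_km: float              # total transport work (forward + take-back flows)
    cost_per_ton_km: float     # freight rate
    kg_co2_per_ton_km: float   # transport emission factor

    def total_cost(self) -> float:
        return self.fixed_cost + self.ton_km * self.cost_per_ton_km

    def transport_emissions(self) -> float:
        return self.ton_km * self.kg_co2_per_ton_km

centralized = Option("centralized", fixed_cost=1_000_000, ton_km=5_000_000,
                     cost_per_ton_km=0.10, kg_co2_per_ton_km=0.06)
decentralized = Option("decentralized", fixed_cost=1_600_000, ton_km=2_000_000,
                       cost_per_ton_km=0.10, kg_co2_per_ton_km=0.06)

for opt in (centralized, decentralized):
    print(f"{opt.name}: cost = {opt.total_cost():,.0f}, "
          f"transport CO2 = {opt.transport_emissions():,.0f} kg")
```

    Under these made-up numbers the centralized option is cheaper while the decentralized one produces less transport CO2, which is exactly the kind of tension between capital cost and transport impact that the research evaluates.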

    Agglomerative hierarchical clustering algorithm for community detection in large-scale complex networks

    In this thesis, several algorithms are proposed to efficiently compute high-quality community structure in large-scale complex networks. First, a novel similarity measure is proposed that determines structural similarity in a graph by dynamically diffusing and capturing information beyond the immediate neighborhood of the connected nodes being analyzed. This new similarity is modeled as an iterated function that can be solved by fixed-point iteration in super-linear time and memory complexity, so it can be applied to large-scale graphs. To show the advantages of the proposed similarity in the community detection task, we replace the local structural similarity used in the SCAN algorithm with the proposed measure, improving the quality of the detected community structure and reducing sensitivity to the parameter ε. Second, a fast heuristic algorithm is proposed for multi-scale, hierarchical community detection, inspired by agglomerative hierarchical clustering. This algorithm uses the Dynamic Structural Similarity and, rather than merging only the clusters with maximal similarity as in the classical hierarchical approach, merges every cluster that does not yet satisfy a community definition (passed as a parameter) with its most similar adjacent clusters. The algorithm computes the similarity between clusters while checking whether each cluster meets the specified community definition, in time linear in the number of clusters at each iteration. Since complex networks are sparse graphs, the approach has super-linear average-case time complexity with respect to the input size, making it suitable for large-scale complex networks. Third, an efficient algorithm is proposed to detect fuzzy and crisp overlapping community structure. This algorithm leverages the disjoint community structure generated by the heuristic algorithm above. Three core elements are proposed to compute the overlapping community structure: (i) a connectivity function, computed from the Dynamic Structural Similarity, that quantifies the density of a node's connections towards a disjoint community; (ii) an ε-Core community definition that increases the probability of identifying in-between communities in the disjoint community structure; and (iii) a membership function that computes the soft partition from the core disjoint communities. Because this algorithm keeps the same computational complexity as the original disjoint algorithm, it remains applicable to large-scale graphs. Finally, extensive experiments are performed to test the properties, efficiency, and efficacy of the proposed algorithms and to compare them with the state of the art. The experimental results show that the proposed algorithms provide a better trade-off among the quality of the detected community structure, computational complexity, and usability than state-of-the-art methods.
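
    The abstract does not spell out the exact update rule of the Dynamic Structural Similarity, but the sketch below illustrates the general idea of an iterated, diffusion-style similarity computed by fixed-point iteration: start from a SCAN-style local similarity and repeatedly mix in similarity diffused from adjacent edges. The initial measure, the mixing weight alpha, and the averaging scheme are illustrative assumptions, not the thesis's formula.

```python
# A minimal sketch of an iterated, diffusion-style structural similarity,
# solved by fixed-point iteration. The initial SCAN-style similarity, the
# mixing weight `alpha`, and the neighbour-averaging scheme are assumptions
# made for illustration only.

import math
from collections import defaultdict

def initial_similarity(adj, u, v):
    """SCAN-style structural similarity of the closed neighborhoods of u and v."""
    nu, nv = adj[u] | {u}, adj[v] | {v}
    return len(nu & nv) / math.sqrt(len(nu) * len(nv))

def dynamic_similarity(edges, alpha=0.5, iterations=5):
    """Fixed-point iteration that lets similarity diffuse past immediate neighbors."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    sim = {frozenset(e): initial_similarity(adj, *e) for e in edges}
    for _ in range(iterations):
        new_sim = {}
        for u, v in edges:
            # Average the similarity of the edges adjacent to (u, v).
            neighbours = [frozenset((u, w)) for w in adj[u] if w != v] + \
                         [frozenset((v, w)) for w in adj[v] if w != u]
            neighbours = [e for e in neighbours if e in sim]
            diffused = sum(sim[e] for e in neighbours) / len(neighbours) if neighbours else 0.0
            new_sim[frozenset((u, v))] = (1 - alpha) * initial_similarity(adj, u, v) + alpha * diffused
        sim = new_sim
    return sim

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("d", "e"), ("e", "f"), ("d", "f")]
for e, s in dynamic_similarity(edges).items():
    print(sorted(e), round(s, 3))
```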
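    Building on the previous sketch, the fragment below illustrates the agglomerative step described above: every cluster that does not yet satisfy a community definition (supplied as a predicate parameter) is merged with its most similar adjacent cluster. The is_community predicate and the boundary-similarity scoring used to pick the merge target are illustrative assumptions, not the thesis's exact criteria.

```python
# Sketch of the agglomerative step: clusters that fail a parameterised
# community definition are merged with their most similar adjacent cluster.
# Reuses `edges` and `dynamic_similarity` from the previous sketch.

def agglomerate(edges, sim, is_community, max_rounds=10):
    cluster_of = {}                      # node -> cluster id
    clusters = {}                        # cluster id -> set of member nodes
    for i, node in enumerate(sorted({n for e in edges for n in e})):
        cluster_of[node] = i
        clusters[i] = {node}

    for _ in range(max_rounds):
        merged = False
        for cid, members in list(clusters.items()):
            if cid not in clusters or is_community(members, edges):
                continue
            # Score adjacent clusters by the summed edge similarity across
            # the cluster boundary, then merge into the best-scoring one.
            scores = {}
            for u, v in edges:
                cu, cv = cluster_of[u], cluster_of[v]
                if cid in (cu, cv) and cu != cv:
                    other = cv if cu == cid else cu
                    scores[other] = scores.get(other, 0.0) + sim[frozenset((u, v))]
            if not scores:
                continue
            best = max(scores, key=scores.get)
            for node in clusters.pop(cid):   # merge cid into best
                cluster_of[node] = best
                clusters[best].add(node)
            merged = True
        if not merged:
            break
    return clusters

# Example community definition (an assumption): at least 3 members.
sim = dynamic_similarity(edges)
print(agglomerate(edges, sim, lambda members, _: len(members) >= 3))
```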

    A computer graphics approach to logistics strategy modelling

    This thesis describes the development and application of a decision support system for logistics strategy modelling. The decision support system enables the modelling of logistics systems at a strategic level for any country or area in the world. The model runs on IBM PC or compatible computers under DOS (disk operating system). The decision support system uses colour graphics to represent the different physical functions of a logistics system, and the graphics are machine independent. The model displays on screen the map of the area or country being considered for logistics planning. The decision support system is algorithmically hybrid: it employs optimisation for allocation. Customers are allocated by building a network path from each customer to the source points, taking into consideration all production and throughput constraints on factories, distribution depots, and transshipment points. The system uses visually interactive computer graphics heuristics to find the best possible locations for distribution depots and transshipment points. For a single-depot system it gives the optimum solution, but where more than one depot is involved the optimum is not guaranteed. The developed model is cost-driven: it represents all the logistics system costs in their proper form, and its solution depends strongly on the relationships between those costs. The locations of depots and transshipment points depend on the relationship between inbound and outbound transportation costs. The model has been validated on real-world problems, some of which are described here. The advantages of such a decision support system for problem formulation are discussed, as is the contribution of such an approach at the validation and solution presentation stages.
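
    The thesis's visually interactive heuristic is not described in enough detail here to reproduce, but as a point of reference the sketch below shows one standard single-depot location method, Weiszfeld iteration, which finds the point minimising demand-weighted straight-line transport cost. The customer coordinates and demand weights are invented, and a real model would use road distances and the fuller cost structure discussed above.

```python
# Illustrative sketch only (not the thesis's heuristic): Weiszfeld iteration
# for locating a single depot that minimises demand-weighted Euclidean
# transport cost. Customer data below are invented.

def weiszfeld(customers, iterations=100, eps=1e-9):
    """customers: list of (x, y, demand). Returns a near-optimal depot location."""
    # Start from the demand-weighted centroid.
    total = sum(w for _, _, w in customers)
    x = sum(cx * w for cx, cy, w in customers) / total
    y = sum(cy * w for cx, cy, w in customers) / total
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for cx, cy, w in customers:
            d = max(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5, eps)
            num_x += w * cx / d
            num_y += w * cy / d
            denom += w / d
        x, y = num_x / denom, num_y / denom
    return x, y

customers = [(0, 0, 10), (10, 0, 20), (5, 8, 15), (2, 6, 5)]
print(weiszfeld(customers))
```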

    An Algorithmic Interpretation of Quantum Probability

    The Everett (or relative-state, or many-worlds) interpretation of quantum mechanics has come under fire for inadequately dealing with the Born rule (the formula for calculating quantum probabilities). Numerous attempts have been made to derive this rule from the perspective of observers within the quantum wavefunction. These are not really analytic proofs, but rather attempts to derive the Born rule as a synthetic a priori necessity, given the nature of human observers (a fact not fully appreciated even by all of those who have attempted such proofs). I show why existing attempts are unsuccessful or only partly successful, and postulate that Solomonoff's algorithmic approach to the interpretation of probability theory could clarify the problems with these approaches. The Sleeping Beauty probability puzzle is used as a springboard from which to deduce an objectivist, yet synthetic a priori, framework for quantum probabilities that properly frames the role of self-location and self-selection (anthropic) principles in probability theory. I call this framework "algorithmic synthetic unity" (or ASU). I offer no new formal proof of the Born rule, largely because I feel that existing proofs (particularly Gleason's) are already adequate, and as close to being a formal proof as one should expect or want. Gleason's one unjustified assumption, known as noncontextuality, is, I will argue, completely benign when considered within the algorithmic framework that I propose. I will also argue that, to the extent the Born rule can be derived within ASU, there is no reason to suppose that we could not also derive all the other fundamental postulates of quantum theory. There is nothing special here about the Born rule, and I suggest that a completely successful Born rule proof might only be possible once all the other postulates become part of the derivation. As a start towards this end, I show how we can already derive the essential content of the fundamental postulates of quantum mechanics, at least in outline, especially if we allow some educated and well-motivated guesswork along the way. The result is some steps towards a coherent and consistent algorithmic interpretation of quantum mechanics.
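
    For reference, the Born rule discussed above is the standard textbook statement given below (not quoted from the thesis), together with the form that Gleason's theorem forces on any noncontextual probability assignment over projectors.

```latex
% Standard textbook statement of the Born rule (not taken from the thesis):
% for a state |psi> and a measurement with orthonormal outcome states |i>,
% the probability of outcome i is the squared amplitude.
\[
  P(i) \;=\; \lvert \langle i \mid \psi \rangle \rvert^{2},
  \qquad \sum_i P(i) = 1 .
\]
% Gleason's theorem: on a Hilbert space of dimension at least 3, any
% noncontextual probability measure over projectors P has the form
\[
  \mu(P) \;=\; \operatorname{Tr}(\rho P)
\]
% for some density operator rho, which reduces to the Born rule for pure states.
```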