15 research outputs found

    On Spectrum Assignment in Elastic Optical Tree-Networks

    To meet the growing demand of Internet traffic, a new generation of optical networks is being developed: elastic optical networks (EONs). EON technology makes it possible to use the optical spectrum efficiently and flexibly. This flexibility promises to overcome the difficulties raised by the growth and heterogeneity of traffic, but it also makes the resource allocation problem more complex. In this paper, we study the spectrum assignment problem in elastic optical tree-networks. In such networks, even though the routing is fixed, spectrum assignment is NP-hard. We present hardness and approximation results for the special cases where the network is a star or a binary tree.
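
    A minimal illustration of the problem (not the algorithms from this paper): on a star network, a demand between two leaves uses the two edges through the center and must receive a contiguous block of spectrum slots, disjoint from every demand that shares an edge with it. The first-fit sketch below, with made-up demand data, shows how such an assignment can be computed greedily; it is not guaranteed to be optimal.

        # First-fit spectrum assignment on a star (illustrative sketch only).
        # A demand (u, v, width) is routed over the edges (u, center) and
        # (center, v) and needs `width` contiguous slots, disjoint from any
        # already-placed demand that shares an edge (i.e. a leaf) with it.
        def first_fit_star(demands):
            placed = []                      # (u, v, width, start)
            assignment = {}
            for u, v, width in demands:
                start = 0
                while True:
                    clash = [p for p in placed
                             if ({p[0], p[1]} & {u, v})                      # shared edge
                             and not (p[3] + p[2] <= start or start + width <= p[3])]
                    if not clash:
                        break
                    start = max(p[3] + p[2] for p in clash)  # jump past conflicts
                placed.append((u, v, width, start))
                assignment[(u, v)] = (start, start + width)
            return assignment

        # Three leaves; all demands pairwise share an edge through the center.
        print(first_fit_star([(1, 2, 3), (2, 3, 2), (1, 3, 4)]))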

    About equivalent interval colorings of weighted graphs

    Given a graph G=(V,E) with strictly positive integer weights ωi on the vertices i∈V, a k-interval coloring of G is a function I that assigns an interval I(i)⊆{1,…,k} of ωi consecutive integers (called colors) to each vertex i∈V. If two adjacent vertices i and j have common colors, i.e. I(i)∩I(j)≠∅ for an edge [i,j] in G, then the edge [i,j] is said to be conflicting. A k-interval coloring without conflicting edges is said to be legal. The interval coloring problem (ICP) is to determine the smallest integer k, called the interval chromatic number of G and denoted χint(G), such that there exists a legal k-interval coloring of G. For a fixed integer k, the k-interval graph coloring problem (k-ICP) is to determine a k-interval coloring of G with a minimum number of conflicting edges. The ICP and k-ICP generalize classical vertex coloring problems where a single color has to be assigned to each vertex (i.e., ωi=1 for all vertices i∈V). Two k-interval colorings I1 and I2 are said to be equivalent if there is a permutation π of the integers 1,…,k such that ℓ∈I1(i) if and only if π(ℓ)∈I2(i) for all vertices i∈V. As for classical vertex coloring, the efficiency of algorithms that solve the ICP or the k-ICP can be increased by avoiding equivalent k-interval colorings, provided they can be identified very quickly. To this end, we define and prove a necessary and sufficient condition for the equivalence of two k-interval colorings. We then show how a simple tabu search algorithm for the k-ICP can be improved by forbidding the visit of equivalent solutions.
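
    A small sketch of these definitions (not the paper's equivalence test or tabu search): each vertex i receives an interval I(i) of ωi consecutive colors, and an edge is conflicting exactly when the two endpoint intervals share a color. The graph, weights, and coloring below are made up for illustration.

        # Conflicting edges of a k-interval coloring, per the definitions above.
        def interval(start, weight):
            """Colors of a vertex: {start, ..., start + weight - 1}."""
            return set(range(start, start + weight))

        def conflicting_edges(edges, starts, weights):
            """Edges [i, j] whose color intervals intersect."""
            return [(i, j) for (i, j) in edges
                    if interval(starts[i], weights[i]) & interval(starts[j], weights[j])]

        # Triangle with weights 2, 1, 2 and a 4-interval coloring:
        edges = [(0, 1), (1, 2), (0, 2)]
        weights = {0: 2, 1: 1, 2: 2}
        starts = {0: 1, 1: 3, 2: 3}          # I(0)={1,2}, I(1)={3}, I(2)={3,4}
        print(conflicting_edges(edges, starts, weights))     # -> [(1, 2)]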

    Efficient Memory Management for GPU-based Deep Learning Systems

    The GPU (graphics processing unit) has been used for many data-intensive applications, and deep learning systems are nowadays among its most important consumers. As deep learning applications adopt deeper and larger models to achieve higher accuracy, memory management becomes an important research topic for deep learning systems, given that the GPU has a limited memory size. Many approaches have been proposed to address this issue, e.g., model compression and memory swapping. However, they either degrade the model accuracy or require a lot of manual intervention. In this paper, we propose two orthogonal approaches to reduce the memory cost from the system perspective. Our approaches are transparent to the models and thus do not affect the model accuracy. They are achieved by exploiting the iterative nature of deep learning training algorithms to derive the lifetime and read/write order of all variables. With the lifetime semantics, we are able to implement a memory pool with minimal fragmentation. However, the optimization problem is NP-complete. We propose a heuristic algorithm that reduces memory usage by up to 13.3% compared with Nvidia's default memory pool, with equal time complexity. With the read/write semantics, variables that are not in use can be swapped out from GPU to CPU to reduce the memory footprint. We propose multiple swapping strategies that automatically decide which variable to swap and when to swap it out (in), reducing the memory cost by up to 34.2% without communication overhead.
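
    The sketch below illustrates the general idea behind lifetime-based pooling, not the paper's heuristic: once the allocation step, free step, and size of every variable are known, variables whose lifetimes overlap are placed at non-overlapping offsets, here with a simple greedy rule. All names and numbers are illustrative.

        # Greedy offset assignment from variable lifetimes (illustration only).
        def greedy_offsets(variables):
            """variables: list of (name, alloc_step, free_step, size_bytes)."""
            placed = []                      # (alloc, free, size, offset)
            offsets = {}
            # Placing larger variables first tends to reduce fragmentation.
            for name, alloc, free, size in sorted(variables, key=lambda v: -v[3]):
                offset = 0
                while True:
                    blockers = [p for p in placed
                                if not (p[1] <= alloc or free <= p[0])            # lifetimes overlap
                                and not (p[3] + p[2] <= offset or offset + size <= p[3])]
                    if not blockers:
                        break
                    offset = max(p[3] + p[2] for p in blockers)
                placed.append((alloc, free, size, offset))
                offsets[name] = offset
            peak = max(p[3] + p[2] for p in placed)
            return offsets, peak

        # "a" and "b" overlap in time; "c" can reuse "a"'s space after step 5.
        print(greedy_offsets([("a", 0, 5, 100), ("b", 2, 8, 50), ("c", 6, 9, 100)]))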

    Generation of random chordal graphs using subtrees of a tree

    Chordal graphs form one of the most studied graph classes. Several graph problems that are NP-hard in general become solvable in polynomial time on chordal graphs, whereas many others remain NP-hard. For a large group of problems among the latter, approximation algorithms, parameterized algorithms, and algorithms with moderately exponential or sub-exponential running time have been designed. Chordal graphs have also gained increasing interest in recent years in the area of enumeration algorithms. Being able to test these algorithms on instances of chordal graphs is crucial for understanding the tractability of hard problems on graph classes. Unfortunately, only a few published papers give algorithms for generating chordal graphs, and even in these papers, only very few methods aim for generating a large variety of chordal graphs. Surprisingly, none of these methods is directly based on the “intersection of subtrees of a tree” characterization of chordal graphs. In this paper, we give an algorithm for generating chordal graphs, based on the characterization that a graph is chordal if and only if it is the intersection graph of subtrees of a tree. Upon generating a random host tree, we give and test various methods that generate subtrees of the host tree. We compare our methods to one another and to existing ones for generating chordal graphs. Our experiments show that one of our methods is able to generate the largest variety of chordal graphs in terms of maximal clique sizes. Moreover, two of our subtree generation methods result in an overall complexity of our generation algorithm that is the best possible time complexity for a method generating the entire node set of subtrees in an “intersection of subtrees of a tree” representation. The instances corresponding to the results presented in this paper, as well as a set of relatively small-sized instances, are made available online.
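
    A hedged sketch of the characterization itself (not the paper's specific subtree-generation methods): build a random host tree, grow one random connected subtree per vertex, and add an edge between two vertices exactly when their subtrees intersect; the resulting graph is chordal by construction. The growth probability and sizes below are arbitrary.

        import random

        def random_tree(n):
            """Random labeled host tree: node i attaches to a random earlier node."""
            return {i: random.randrange(i) for i in range(1, n)}    # child -> parent

        def random_subtree(parent, n):
            """Grow a random connected subtree of the host tree."""
            adj = {v: set() for v in range(n)}
            for c, p in parent.items():
                adj[c].add(p)
                adj[p].add(c)
            nodes = {random.randrange(n)}
            frontier = set().union(*(adj[v] for v in nodes)) - nodes
            while frontier and random.random() < 0.6:               # arbitrary growth rate
                nodes.add(random.choice(sorted(frontier)))
                frontier = set().union(*(adj[v] for v in nodes)) - nodes
            return nodes

        def random_chordal(num_vertices, host_size):
            parent = random_tree(host_size)
            subtrees = [random_subtree(parent, host_size) for _ in range(num_vertices)]
            return [(i, j) for i in range(num_vertices)
                    for j in range(i + 1, num_vertices) if subtrees[i] & subtrees[j]]

        print(random_chordal(6, 10))         # edge list of a random chordal graph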