
    Optimizing Constrained Subtrees of Trees

    Given a tree G = (V, E) and a weight function defined on subsets of its nodes, we consider two associated problems. The first, the "rooted subtree problem", is to find a maximum-weight subtree with a specified root from a given set of subtrees. The second, the "subtree packing problem", is to find a maximum-weight packing of node-disjoint subtrees chosen from a given set of subtrees, where the value of each subtree may depend on its root. We show that the complexities of the two problems are linked: the subtree packing problem is polynomially solvable if and only if each rooted subtree problem is. We also show that the convex hulls of the feasible solutions to the two problems are related: the convex hull of solutions to the packing problem is obtained by "pasting together" the convex hulls of the rooted subtree problems. We examine in detail the case where the set of feasible subtrees rooted at node i consists of all subtrees with at most k nodes. For this case we derive valid inequalities and specify the convex hull when k < 4.
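
    As a minimal sketch of the special case named above (feasible subtrees rooted at a node are all connected subtrees with at most k nodes), the rooted subtree problem can be solved by a standard tree-knapsack dynamic program. The adjacency/weight format and the toy instance are illustrative assumptions, not the paper's polyhedral method.

```python
# Tree-knapsack DP for one rooted subtree problem instance in which the
# feasible subtrees rooted at `root` are all connected subtrees with at
# most k nodes. Runs in O(n * k^2) time.
def max_weight_rooted_subtree(adj, weight, root, k):
    def dfs(v, parent):
        # dp[j] = best weight of a connected subtree containing v
        #         that uses exactly j nodes from v's branch
        dp = [float("-inf")] * (k + 1)
        dp[1] = weight[v]
        for u in adj[v]:
            if u == parent:
                continue
            child = dfs(u, v)
            merged = dp[:]
            # knapsack merge: optionally graft t nodes from u's branch
            for j in range(1, k + 1):
                if dp[j] == float("-inf"):
                    continue
                for t in range(1, k - j + 1):
                    if child[t] > float("-inf"):
                        merged[j + t] = max(merged[j + t], dp[j] + child[t])
            dp = merged
        return dp

    return max(dfs(root, None)[1:])

# toy instance: the path 0-1-2-3 with a negative-weight internal node; k = 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
weight = {0: 5, 1: -2, 2: 7, 3: 4}
print(max_weight_rooted_subtree(adj, weight, 0, 3))  # nodes {0,1,2}: 10
```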

    Asymmetries arising from the space-filling nature of vascular networks

    Cardiovascular networks span the body by branching across many generations of vessels. The resulting structure delivers blood over long distances to supply all cells with oxygen via the relatively short-range process of diffusion at the capillary level. The structural features of the network that accomplish this density and ubiquity of capillaries are often called space-filling. There are multiple strategies to fill a space, but some do not lead to biologically adaptive structures because they require too much construction material or space, deliver resources too slowly, or use too much power to move blood through the system. We empirically measure the structure of real networks (18 humans and 1 mouse) and compare these observations with predictions of model networks that are space-filling and constrained by a few guiding biological principles. We devise a numerical method that enables the investigation of space-filling strategies and determination of which biological principles influence network structure. Optimizing for only a single principle creates unrealistic networks that represent an extreme limit of the possible structures that could be observed in nature. We first study these extreme limits for two competing principles: minimal total material and minimal path lengths. We then combine the two principles and enforce various thresholds for balance in the network hierarchy, an approach that highlights the trade-offs faced by biological networks and yields predictions that better match our empirical data. Comment: 17 pages, 15 figures.
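
    To make the two competing principles concrete, the sketch below scores a toy branching network under both objectives at once. The segment data, the cylindrical-volume material proxy, and the trade-off weight alpha are illustrative assumptions; this is not the authors' numerical method.

```python
import math

# each segment: id -> (parent_id, length, radius); id 0 is the root stem
segments = {
    0: (None, 10.0, 2.0),
    1: (0, 6.0, 1.5),
    2: (0, 6.0, 1.5),
    3: (1, 4.0, 1.0),
    4: (1, 4.0, 1.0),
}

def total_material(segs):
    # construction-material proxy: sum of cylinder volumes pi * r^2 * l
    return sum(math.pi * r * r * l for (_, l, r) in segs.values())

def mean_path_length(segs):
    # average distance from the root to each terminal (capillary-level) tip
    parents = {p for (p, _, _) in segs.values() if p is not None}
    leaves = [i for i in segs if i not in parents]
    def depth(i):
        p, l, _ = segs[i]
        return l + (depth(p) if p is not None else 0.0)
    return sum(depth(i) for i in leaves) / len(leaves)

alpha = 0.5  # assumed weight balancing the two competing principles
score = alpha * total_material(segments) + (1 - alpha) * mean_path_length(segments)
print(f"material={total_material(segments):.1f}  "
      f"path={mean_path_length(segments):.1f}  combined={score:.1f}")
```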

    Lossless and near-lossless source coding for multiple access networks

    A multiple access source code (MASC) is a source code designed for the following network configuration: a pair of correlated information sequences $\{X_i\}_{i=1}^{\infty}$ and $\{Y_i\}_{i=1}^{\infty}$ is drawn independent and identically distributed (i.i.d.) according to joint probability mass function (p.m.f.) $p(x, y)$; the encoder for each source operates without knowledge of the other source; the decoder jointly decodes the encoded bit streams from both sources. The work of Slepian and Wolf describes all rates achievable by MASCs of infinite coding dimension ($n \to \infty$) and asymptotically negligible error probabilities ($P_e^{(n)} \to 0$). In this paper, we consider the properties of optimal instantaneous MASCs with finite coding dimension ($n < \infty$) and both lossless ($P_e^{(n)} = 0$) and near-lossless ($P_e^{(n)} > 0$) performance. The interest in near-lossless codes is inspired by the discontinuity in the limiting rate region at $P_e^{(n)} = 0$ and the resulting performance benefits achievable by using near-lossless MASCs as entropy codes within lossy MASCs. Our central results include generalizations of Huffman and arithmetic codes to the MASC framework for arbitrary $p(x, y)$, $n$, and $P_e^{(n)}$, and polynomial-time design algorithms that approximate these optimal solutions.
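
    For context, the limiting Slepian-Wolf rate region referenced above is straightforward to compute for any joint p.m.f.: $R_X \ge H(X|Y)$, $R_Y \ge H(Y|X)$, and $R_X + R_Y \ge H(X, Y)$. The sketch below evaluates these three bounds; the example p.m.f. is an assumption for illustration.

```python
import math

def H(probs):
    # Shannon entropy in bits of a probability vector
    return -sum(p * math.log2(p) for p in probs if p > 0)

p = {("a", "a"): 0.4, ("a", "b"): 0.1,
     ("b", "a"): 0.1, ("b", "b"): 0.4}   # assumed joint p.m.f.

# marginals p(x) and p(y)
px, py = {}, {}
for (x, y), v in p.items():
    px[x] = px.get(x, 0.0) + v
    py[y] = py.get(y, 0.0) + v

H_xy = H(list(p.values()))
H_x_given_y = H_xy - H(list(py.values()))   # H(X|Y) = H(X,Y) - H(Y)
H_y_given_x = H_xy - H(list(px.values()))   # H(Y|X) = H(X,Y) - H(X)

print(f"R_X >= {H_x_given_y:.3f}, R_Y >= {H_y_given_x:.3f}, "
      f"R_X + R_Y >= {H_xy:.3f} bits/symbol")
```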

    A Constrained Sequential-Lamination Algorithm for the Simulation of Sub-Grid Microstructure in Martensitic Materials

    We present a practical algorithm for partially relaxing multiwell energy densities such as pertain to materials undergoing martensitic phase transitions. The algorithm is based on sequential lamination, but the evolution of the microstructure during a deformation process is required to satisfy a continuity constraint, in the sense that the new microstructure should be reachable from the preceding one by a combination of branching and pruning operations. All microstructures generated by the algorithm are in static and configurational equilibrium. Owing to the continuity constraint imposed on the microstructural evolution, the predicted material behavior may be path-dependent and exhibit hysteresis. In cases in which there is a strict separation of micro- and macrostructural length scales, the proposed relaxation algorithm may effectively be integrated into macroscopic finite-element calculations at the sub-grid level. We demonstrate this aspect of the algorithm by means of a numerical example concerned with the indentation of a Cu-Al-Ni shape memory alloy by a spherical indenter. Comment: 27 pages with 9 figures. To appear in Computer Methods in Applied Mechanics and Engineering. New version incorporates minor revisions from review.
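
    A one-dimensional toy version of the underlying relaxation idea, assuming two quadratic wells: a first-order laminate mixes the well strains with volume fraction lam so that the average matches the prescribed strain F, which reproduces the convex envelope of the two-well energy. This is only a sketch of lamination itself, not the paper's constrained branching-and-pruning algorithm; the well positions are assumptions.

```python
import numpy as np

a, b = -0.1, 0.1          # assumed transformation strains of the two wells

def W(F):
    # two-well energy density: squared distance to the nearest well
    return min((F - a) ** 2, (F - b) ** 2)

def W_laminate(F, n=2001):
    # search over the volume fraction lam of a first-order laminate; in 1-D
    # every mixture of strains is compatible, so no rank-one check is needed
    best = W(F)
    for lam in np.linspace(0.0, 1.0, n)[1:-1]:
        # pin the branch strains at the wells, then shift both equally so
        # the laminate's average strain equals the prescribed F
        shift = F - (lam * a + (1 - lam) * b)
        best = min(best, lam * W(a + shift) + (1 - lam) * W(b + shift))
    return best

for F in (-0.15, -0.05, 0.0, 0.05, 0.15):
    # relaxed energy vanishes between the wells and matches W(F) outside
    print(f"F={F:+.2f}  W={W(F):.4f}  relaxed={W_laminate(F):.4f}")
```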

    SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization

    Computer vision is experiencing an AI renaissance, in which machine learning models are expediting important breakthroughs in academic research and commercial applications. Effectively training these models, however, is not trivial, due in part to hyperparameters: user-configured values that control a model's ability to learn from data. Existing hyperparameter optimization methods are highly parallel but make no effort to balance the search across heterogeneous hardware or to prioritize searching high-impact spaces. In this paper, we introduce a framework for massively Scalable Hardware-Aware Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the relative complexity of each search space and monitors performance on the learning task over all trials. These metrics are then used as heuristics to assign hyperparameters to distributed workers based on their hardware. We first demonstrate that our framework achieves double the throughput of a standard distributed hyperparameter optimization framework by optimizing an SVM for MNIST with 150 distributed workers. We then conduct a model search with SHADHO over the course of one week, using 74 GPUs across two compute clusters to optimize U-Net for a cell segmentation task, discovering 515 models that achieve a lower validation loss than standard U-Net. Comment: 10 pages, 6 figures.
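
    A minimal sketch of the scheduling heuristic described above (not SHADHO's actual API): give each search space a crude complexity score, then pair the most complex spaces with the fastest hardware. All names and scores here are assumptions.

```python
def complexity(space):
    # space: dict mapping hyperparameter name -> ("continuous", lo, hi)
    #        or ("discrete", [choices]); continuous dims count double
    return sum(2.0 if kind == "continuous" else 1.0
               for kind, *_ in space.values())

spaces = {
    "svm_rbf":    {"C": ("continuous", 1e-3, 1e3),
                   "gamma": ("continuous", 1e-4, 1.0)},
    "svm_linear": {"C": ("continuous", 1e-3, 1e3)},
    "knn":        {"k": ("discrete", list(range(1, 32)))},
}
workers = [("gpu-node-1", 10.0), ("gpu-node-2", 8.0),
           ("cpu-node-1", 1.0)]  # (name, relative speed)

# rank spaces by complexity and workers by speed, then pair them off so the
# highest-impact searches land on the most capable hardware
ranked_spaces = sorted(spaces, key=lambda s: complexity(spaces[s]), reverse=True)
ranked_workers = sorted(workers, key=lambda w: w[1], reverse=True)
for space_name, (worker, speed) in zip(ranked_spaces, ranked_workers):
    print(f"{space_name} (complexity {complexity(spaces[space_name]):.1f})"
          f" -> {worker}")
```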