    Preserving the Work of Mitchell/Giurgola Associates


    A powerful heuristic for telephone gossiping

    A refined heuristic for computing schedules for gossiping in the telephone model is presented. The heuristic is fast: for a network with n nodes and m edges that requires R rounds for gossiping, the running time is O(R n log(n) m) for all tested classes of graphs. This moderate time consumption makes it possible to compute gossiping schedules for networks with more than 10,000 PUs and 100,000 connections. The heuristic is good: in practice the computed schedules never exceed the optimum by more than a few rounds. The heuristic is versatile: it can also be used for broadcasting and more general information-dispersion patterns, and it can handle both the unit-cost and the linear-cost model. In fact, the heuristic is so good that for CCC, shuffle-exchange, butterfly, de Bruijn, star and pancake networks the constructed gossiping schedules are better than the best theoretically derived ones. For example, for gossiping on a shuffle-exchange network with 2^{13} PUs, the former upper bound was 49 rounds, while our heuristic finds a schedule requiring 31 rounds. The heuristic also improves on many previously known broadcasting results. A second heuristic works even better for CCC, butterfly, star and pancake networks. For example, with this heuristic we found that gossiping on a pancake network with 7! PUs can be performed in 15 rounds, 2 fewer than achieved by the best theoretical construction. This second heuristic is less versatile than the first, but with refined search techniques it can tackle even larger problems, the main limitation being storage capacity. Another advantage is that the constructed schedules can be represented concisely.
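    As context for the model above, a minimal sketch of the unit-cost telephone model (not the paper's heuristic): a schedule is a list of rounds, each round a set of disjoint calls, and a call lets both endpoints exchange everything they currently know. The function name and the toy 4-cycle schedule are illustrative, not from the paper.

        def gossip_complete(n, schedule):
            # Each node starts knowing only its own token.
            know = [{v} for v in range(n)]
            for matching in schedule:           # one communication round
                for u, v in matching:           # calls within a round are disjoint
                    merged = know[u] | know[v]  # a call exchanges all known tokens
                    know[u] = know[v] = merged
            # Gossiping is complete when every node knows all n tokens.
            return all(len(k) == n for k in know)

        # Toy check: on a 4-cycle, two rounds of perfect matchings suffice.
        schedule = [[(0, 1), (2, 3)], [(1, 2), (3, 0)]]
        print(gossip_complete(4, schedule))     # True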

    New genetic resources for mammalian developmental biologists

    The utilization of homologous recombination in embryonic stem cells as a means to generate mice carrying pre-determined modifications of genomic sequences has revolutionized the study of developmental biology. Recognizing the efficiencies that can be obtained by high-throughput production at centralized technology centers, a number of large-scale efforts to generate mice with targeted mutations have been funded. These programs are reaching fruition, and a variety of libraries of embryonic stem cells with defined mutations are now available.

    Spectacular Role of Electron Correlation in the Hyperfine Interactions in $^2D_{5/2}$ States in Alkaline Earth Ions

    The low-lying nd $^2D_{5/2}$ (n = 3, 4, 5) states of alkaline earth ions are of vital importance in a number of different physical applications. The hyperfine structure constants of these states are characterized by unusually strong electron correlation effects. Relativistic coupled-cluster theory has been employed to carry out {\it ab initio} calculations of these constants. The role of the all-order core-polarization effects was found to be decisive in obtaining good agreement between our calculated results and accurate measurements. The present work is an apt demonstration of the power of the coupled-cluster method to cope with strongly interacting configurations.
    Comment: Submitted to Physical Review Letters; 3 figures and 5 tables
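    For context only (a standard first-order textbook relation, not taken from the abstract): the magnetic-dipole hyperfine constant A of a state with nuclear spin I and electronic angular momentum J shifts each hyperfine level F by

        \[
            W_F = \frac{A}{2}\,K, \qquad K = F(F+1) - I(I+1) - J(J+1),
        \]

    so for a $^2D_{5/2}$ state (J = 5/2) the measured splittings between adjacent F levels determine A directly; the electric-quadrupole constant B contributes a further K-dependent term, omitted here.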

    Smoothed Complexity Theory

    Smoothed analysis is a new way of analyzing algorithms introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, such as P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems. Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of individual algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty. We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first hardness results (for bounded halting and tiling) and tractability results (for binary optimization problems, graph coloring, and satisfiability). Furthermore, we discuss extensions and shortcomings of our model and relate it to semi-random models.
    Comment: to be presented at MFCS 2012
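    As a reminder of the underlying notion (Spielman and Teng's definition, paraphrased; the paper's classes refine it, and the Gaussian perturbation model below is one common instantiation, assumed here for concreteness): an algorithm A with running time T_A runs in smoothed polynomial time if

        \[
            \max_{x \in X_n} \; \mathbb{E}_{g \sim \mathcal{N}(0, I)}\!\left[\, T_A(x + \sigma g) \,\right] \le \mathrm{poly}(n, 1/\sigma),
        \]

    i.e., the expected running time on any adversarial size-n instance x, after random perturbation of magnitude sigma, is polynomial in n and 1/sigma; sigma -> 0 recovers worst-case analysis, while large sigma approaches average-case analysis.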

    Refugees and the City: UN-Habitat’s New Urban Agenda

    Special protection for refugees and displaced persons should be part of countries' housing policies.

    Transient wall shear stress estimation in coronary bifurcations using convolutional neural networks

    Background and Objective: Haemodynamic metrics, such as blood-flow-induced shear stresses at the inner vessel lumen, are associated with the development and progression of coronary artery disease. Understanding these metrics may therefore improve the assessment of an individual's coronary disease risk. However, calculating such luminal Wall Shear Stress (WSS) with traditional Computational Fluid Dynamics (CFD) methods is slow and computationally expensive, so CFD-based haemodynamic computation is not suitable for integrated, large-scale use in clinical settings.
    Methods: In this work, deep learning techniques are proposed as an alternative to CFD, whereby luminal WSS magnitude can be predicted in coronary bifurcations throughout the cardiac cycle based on the steady-state solution (which takes <120 seconds to calculate, including preprocessing), the vessel geometry, and additional global features. The deep learning model is trained on a dataset of 101 patient-specific and 2626 synthetic left main bifurcation models, with 26 separate patient-specific cases used as the test set.
    Results: The model produced high-fidelity predictions, deviating by <5% (normalised against mean WSS magnitude) from the CFD-derived values used as the gold standard, while being orders of magnitude faster: on average <2 minutes versus 3 hours of computation for transient CFD.
    Conclusions: This method offers a new approach that substantially reduces the computational cost involved in, for example, large-scale population studies of coronary haemodynamic metrics, and may therefore open the pathway for future clinical integration.
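    A minimal sketch of the general idea (not the authors' architecture; the layer sizes, channel meanings, and 20-instant output are assumptions for illustration): a small convolutional network maps per-point steady-state WSS and geometric features sampled over the lumen surface to WSS magnitude at several instants of the cardiac cycle.

        import torch
        import torch.nn as nn

        class WSSNet(nn.Module):
            """Toy CNN: per-point steady-state + geometry channels in,
            one WSS-magnitude channel per cardiac-cycle instant out."""
            def __init__(self, in_channels=4, time_steps=20):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv1d(64, 64, kernel_size=5, padding=2),
                    nn.ReLU(),
                    nn.Conv1d(64, time_steps, kernel_size=1),
                )

            def forward(self, x):
                # x: (batch, in_channels, n_surface_points)
                return self.net(x)  # (batch, time_steps, n_surface_points)

        model = WSSNet()
        steady = torch.randn(2, 4, 1024)   # 2 toy cases, 1024 lumen sample points
        print(model(steady).shape)         # torch.Size([2, 20, 1024])

    In the paper such a model would be trained against transient CFD ground truth; the random tensor here only demonstrates the input/output shapes.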