
    On-the-fly Data Assessment for High Throughput X-ray Diffraction Measurement

    Investment in brighter sources and larger, faster detectors has accelerated the speed of data acquisition at national user facilities. The accelerated data acquisition offers many opportunities for the discovery of new materials, but it also presents a daunting challenge: the rate of data acquisition far exceeds the current speed of data quality assessment, resulting in less-than-optimal data and data coverage, which in extreme cases forces recollection of data. Herein, we show how this challenge can be addressed through the development of an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large datasets, are highlighted. Deployment of such an approach not only improves the quality of data but also helps optimize the usage of expensive characterization resources by prioritizing measurements of highest scientific impact. We anticipate that our approach will become a starting point for a sophisticated decision tree that optimizes data quality and maximizes scientific content in real time through automation. With these efforts to integrate more automation into data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.
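    To make the idea concrete, here is a minimal sketch of the kind of on-the-fly attribute extraction the abstract describes, assuming a stream of 2-D detector frames; the specific attributes (total counts, a crude signal-to-noise estimate, saturation fraction) and thresholds are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

# Illustrative thresholds; real values would be tuned per instrument (assumption).
MIN_SNR = 5.0
MAX_SATURATED_FRACTION = 0.01
DETECTOR_FULL_SCALE = 65535  # assumed 16-bit detector

def frame_attributes(frame):
    """Extract cheap quality attributes from a single 2-D detector frame."""
    background = float(np.median(frame))
    snr = (float(frame.max()) - background) / (float(frame.std()) + 1e-9)
    saturated = float((frame >= DETECTOR_FULL_SCALE).mean())
    return {"total_counts": float(frame.sum()),
            "snr": snr,
            "saturated_fraction": saturated}

def assess(frame):
    """Return (attributes, ok) so bad frames can be flagged as they arrive."""
    attrs = frame_attributes(frame)
    ok = (attrs["snr"] >= MIN_SNR
          and attrs["saturated_fraction"] <= MAX_SATURATED_FRACTION)
    return attrs, ok

# Synthetic frames standing in for the live detector feed.
rng = np.random.default_rng(0)
for i in range(3):
    frame = rng.poisson(10, size=(256, 256)).astype(np.uint16)
    attrs, ok = assess(frame)
    print(i, "OK" if ok else "FLAG FOR RECOLLECTION", attrs)
```

    Flagging frames as they arrive is what lets operators re-prioritize measurements before the sample leaves the instrument.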

    The N-K Problem in Power Grids: New Models, Formulations and Numerical Experiments (extended version)

    Given a power grid modeled by a network, together with equations describing the power flows, power generation and consumption, and the laws of physics, the so-called N-k problem asks whether there exists a set of k or fewer arcs whose removal will cause the system to fail. The case where k is small is of practical interest. We present theoretical and computational results involving a mixed-integer model and a continuous nonlinear model related to this question.
    Comment: 40 pages, 3 figures
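    As a toy illustration of the decision problem (not the paper's mixed-integer or nonlinear formulations), the sketch below brute-forces the N-k question on a small graph, using loss of connectivity as a stand-in for system failure:

```python
from itertools import combinations

def connected(nodes, arcs):
    """Check whether the undirected graph (nodes, arcs) is connected."""
    adj = {v: set() for v in nodes}
    for u, v in arcs:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(nodes)

def n_k_vulnerable(nodes, arcs, k):
    """Return a set of <= k arcs whose removal disconnects the grid, if one exists.

    Disconnection stands in for "system failure" here; the paper's models
    test power-flow feasibility instead, which this sketch omits.
    """
    for r in range(1, k + 1):
        for removed in combinations(arcs, r):
            rest = [a for a in arcs if a not in removed]
            if not connected(nodes, rest):
                return removed
    return None

nodes = {1, 2, 3, 4}
arcs = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(n_k_vulnerable(nodes, arcs, k=2))  # ((1, 2), (2, 3)): cutting both lines at node 2 islands it
```

    Brute force scales as C(|arcs|, k), which is exactly why optimization formulations are needed for realistic grids.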

    Data Science and Ebola

    Data Science: Today, everybody and everything produces data. People produce large amounts of data in social networks and in commercial transactions. Medical, corporate, and government databases continue to grow. Sensors continue to get cheaper and are increasingly connected, creating an Internet of Things and generating even more data. In every discipline, large, diverse, and rich data sets are emerging, from astrophysics to the life sciences, the behavioral sciences, finance and commerce, the humanities, and the arts. In every discipline, people want to organize, analyze, optimize, and understand their data to answer questions and to deepen insights. The science that is transforming this ocean of data into a sea of knowledge is called data science. This lecture discusses how data science has changed the way one of the most visible challenges to public health, the 2014 Ebola outbreak in West Africa, is handled.
    Comment: Inaugural lecture, Leiden University

    A GPU-accelerated Branch-and-Bound Algorithm for the Flow-Shop Scheduling Problem

    Branch-and-Bound (B&B) algorithms are time-intensive, tree-based exploration methods for solving combinatorial optimization problems to optimality. In this paper, we investigate the use of GPU computing as a major complementary way to speed up these methods. The focus is put on the bounding mechanism of B&B algorithms, which is the most time-consuming part of their exploration process. We propose a parallel B&B algorithm based on a GPU-accelerated bounding model. The proposed approach concentrates on optimizing data-access management to further improve the performance of the bounding mechanism, which uses large and intermediate data sets that do not completely fit in GPU memory. Extensive experiments were carried out on well-known flow-shop scheduling problem (FSP) benchmarks using an Nvidia Tesla C2050 GPU card. We compared the obtained performance to single-threaded and multithreaded CPU-based executions. Speedups of up to 100x are achieved for large problem instances.
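    The following sketch shows the B&B skeleton on a tiny permutation flow-shop instance with a simple machine-load lower bound; the instance data and the bound are illustrative assumptions, and in the paper's approach it is the bound evaluations (one per tree node) that are offloaded to the GPU in large batches.

```python
# Processing times p[job][machine] for a toy 4-job, 3-machine instance (assumed data).
p = [
    [5, 3, 4],
    [2, 6, 3],
    [4, 2, 5],
    [3, 4, 2],
]
n_jobs, n_machines = len(p), len(p[0])

def machine_completions(seq):
    """Per-machine completion times after scheduling the jobs in seq, in order."""
    done = [0] * n_machines
    for j in seq:
        done[0] += p[j][0]
        for m in range(1, n_machines):
            done[m] = max(done[m], done[m - 1]) + p[j][m]
    return done

def lower_bound(seq, remaining):
    """Simple bound: each machine must still process all remaining work.

    The paper uses a much stronger (and costlier) bound; on the GPU, thousands
    of such bound evaluations run in parallel, one per pending tree node.
    """
    done = machine_completions(seq)
    return max(done[m] + sum(p[j][m] for j in remaining) for m in range(n_machines))

best = (float("inf"), None)  # (best makespan found, its job order)

def branch(seq, remaining):
    global best
    if not remaining:
        cmax = machine_completions(seq)[-1]
        if cmax < best[0]:
            best = (cmax, seq)
        return
    for j in remaining:
        child, rest = seq + [j], remaining - {j}
        if lower_bound(child, rest) < best[0]:  # prune nodes that cannot improve
            branch(child, rest)

branch([], set(range(n_jobs)))
print(best)  # (optimal makespan, optimal job permutation)
```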

    A Simulated Annealing Method to Cover Dynamic Load Balancing in Grid Environment

    High-performance scheduling is critical to the achievement of application performance on the computational grid. New scheduling algorithms are in demand for addressing new concerns arising in the grid environment. One of the main phases of scheduling on a grid is related to the load balancing problem; therefore, having a high-performance method to deal with the load balancing problem is essential to obtain satisfactory high-performance scheduling. This paper presents SAGE, a new high-performance method that covers the dynamic load balancing problem by means of a simulated annealing algorithm. Even though this problem has been addressed with several different approaches, only one of those methods is based on a simulated annealing algorithm. Preliminary results show that SAGE not only makes it possible to find a good solution to the problem (effectiveness) but also does so in a reasonable amount of time (efficiency).
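    A minimal sketch of a simulated annealing loop for load balancing, assuming identical nodes and independent task costs; the energy function, neighborhood move, and cooling schedule are illustrative assumptions, not SAGE's actual design.

```python
import math
import random

# Toy instance (assumed): task costs to balance across identical nodes.
tasks = [8, 7, 6, 5, 4, 3, 2, 2, 1]
n_nodes = 3

def load_imbalance(assignment):
    """Energy: load of the most loaded node (lower means better balanced)."""
    loads = [0.0] * n_nodes
    for cost, node in zip(tasks, assignment):
        loads[node] += cost
    return max(loads)

def anneal(t0=10.0, cooling=0.95, steps_per_temp=50, t_min=1e-3):
    """Simulated annealing in the spirit of SAGE; schedule parameters are illustrative."""
    state = [random.randrange(n_nodes) for _ in tasks]
    energy = load_imbalance(state)
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            # Neighborhood move: reassign one random task to a random node.
            candidate = state[:]
            candidate[random.randrange(len(tasks))] = random.randrange(n_nodes)
            delta = load_imbalance(candidate) - energy
            # Accept improvements always, worsenings with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                state, energy = candidate, energy + delta
        t *= cooling
    return state, energy

random.seed(1)
assignment, max_load = anneal()
print(assignment, max_load)  # ideal balance is sum(tasks)/n_nodes = 38/3 ~ 12.67
```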

    Efficiency Resource Allocation for Device-to-Device Underlay Communication Systems: A Reverse Iterative Combinatorial Auction Based Approach

    Peer-to-peer communication has recently attracted attention for local area services. An innovative resource allocation scheme is proposed to improve the performance of mobile peer-to-peer, i.e., device-to-device (D2D), communications as an underlay in downlink (DL) cellular networks. To optimize the system sum rate over the resource sharing of both D2D and cellular modes, we introduce a reverse iterative combinatorial auction as the allocation mechanism. In the auction, all the spectrum resources are treated as a set of resource units, which, as bidders, compete to obtain business, while the packages of the D2D pairs are auctioned off as goods in each auction round. We first formulate the valuation of each resource unit as a basis for the proposed auction. Then a detailed non-monotonic descending-price auction algorithm is presented, based on a utility function that accounts for the channel gain from D2D and the costs to the system. Further, we prove that the proposed auction-based scheme is cheat-proof and converges in a finite number of iteration rounds. We explain the non-monotonicity of the price-update process and show lower complexity compared to a traditional combinatorial allocation. The simulation results demonstrate that the algorithm efficiently achieves good performance on the system sum rate.
    Comment: 26 pages, 6 figures; IEEE Journal on Selected Areas in Communications, 201
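    To give a flavor of the mechanism, here is a heavily simplified descending-price reverse-auction sketch with synthetic valuations; the paper's algorithm additionally supports package bids and non-monotonic price updates, which this single-item-per-bidder toy omits.

```python
import random

# Toy valuations v[unit][pair]: sum-rate gain if a spectrum resource unit is
# shared with a D2D pair. Values are synthetic; the paper derives them from
# channel gains and interference costs.
random.seed(0)
n_units, n_pairs = 4, 3
v = [[random.uniform(1.0, 10.0) for _ in range(n_pairs)] for _ in range(n_units)]

def reverse_descending_auction(step=0.5):
    """Resource units are the bidders, D2D pairs the goods. The price clock
    starts above every valuation and descends; a unit claims a pair once the
    price falls to where claiming yields non-negative utility (value - price)."""
    price = max(max(row) for row in v) + step
    unmatched_units = set(range(n_units))
    unmatched_pairs = set(range(n_pairs))
    matches = {}
    while price > 0 and unmatched_units and unmatched_pairs:
        price -= step
        for u in sorted(unmatched_units):
            # Best still-available pair for this unit at the current price.
            best = max(unmatched_pairs, key=lambda d: v[u][d])
            if v[u][best] >= price:
                matches[u] = (best, price)
                unmatched_units.discard(u)
                unmatched_pairs.discard(best)
                if not unmatched_pairs:
                    break
    return matches

for unit, (pair, price) in reverse_descending_auction().items():
    print(f"unit {unit} -> D2D pair {pair} at clearing price {price:.2f}")
```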

    High-throughput electronic band structure calculations: challenges and tools

    This article discusses the high-throughput approach to band-structure calculations. We present scientific and computational challenges as well as solutions relying on the developed framework (Automatic Flow, AFLOW/ACONVASP). The key factors of the method are the standardization and the robustness of the procedures. Two scenarios are relevant: 1) independent users generating databases in their own computational systems (off-line approach) and 2) teamed users sharing computational information based on a common ground (on-line approach). Both cases are integrated in the framework: for off-line approaches, the standardization is automatic and fully integrated for the 14 Bravais lattices, the primitive and conventional unit cells, and the coordinates of the high-symmetry k-paths in the Brillouin zones. For on-line tasks, the framework offers an expandable web interface where the user can prepare and set up calculations following the proposed standard. A few examples of band structures are included. LSDA+U parameters (U, J) are also presented for Nd, Sm, and Eu.
    Comment: 16 pages, 48 figures, http://materials.duke.edu
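    As an example of the k-path standardization, the sketch below samples a path through the high-symmetry points of the FCC Brillouin zone using the Setyawan-Curtarolo (AFLOW) conventions; the trailing discontinuous segment of the published FCC path is omitted for simplicity.

```python
import numpy as np

# High-symmetry k-points of the FCC Brillouin zone, in fractional coordinates
# of the primitive reciprocal lattice (Setyawan-Curtarolo convention).
FCC_POINTS = {
    "G": (0.0, 0.0, 0.0),        # Gamma
    "X": (0.5, 0.0, 0.5),
    "W": (0.5, 0.25, 0.75),
    "K": (0.375, 0.375, 0.75),
    "L": (0.5, 0.5, 0.5),
    "U": (0.625, 0.25, 0.625),
}
FCC_PATH = ["G", "X", "W", "K", "G", "L", "U", "W", "L", "K"]

def kpath(points, path, n_per_segment=20):
    """Linearly interpolate k-points along each segment of the standard path."""
    kpts = []
    for a, b in zip(path, path[1:]):
        start, end = np.array(points[a]), np.array(points[b])
        for t in np.linspace(0.0, 1.0, n_per_segment, endpoint=False):
            kpts.append((1 - t) * start + t * end)
    kpts.append(np.array(points[path[-1]]))
    return np.array(kpts)

ks = kpath(FCC_POINTS, FCC_PATH)
print(ks.shape)  # (9 segments x 20 points + 1 endpoint, 3) = (181, 3)
```

    Standardizing these paths per Bravais lattice is what makes band structures from different users directly comparable across the database.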