
    Summertime partitioning and budget of NOy compounds in the troposphere over Alaska and Canada: ABLE 3B

    As part of NASA's Arctic Boundary Layer Expedition 3A and 3B field measurement programs, measurements of NO(x), HNO3, PAN, PPN, and NOy were made in the middle to lower troposphere over Alaska and Canada during the summers of 1988 and 1990. These measurements are used to assess the degree of closure within the reactive odd nitrogen (NOy) budget through the comparison of the values of NOy measured with a catalytic converter to the sum of individually measured NOy(i) compounds (i.e., Sigma NOy(i) = NOx + HNO3 + PAN + PPN). Significant differences were observed between the various study regions. In the lower 6 km of the troposphere over Alaska and the Hudson Bay lowlands of Canada, a significant fraction of the NOy budget (30 to 60 per cent) could not be accounted for by the measured Sigma NOy(i). This deficit in the NOy budget is about 100 to 200 parts per trillion by volume (pptv) in the lower troposphere (0.15 to 3 km) and about 200 to 400 pptv in the middle free troposphere (3 to 6.2 km). Conversely, the NOy budget in the northern Labrador and Quebec regions of Canada is almost totally accounted for within the combined measurement uncertainties of NOy and the various NOy(i) compounds. A substantial portion of the NOy budget's 'missing compounds' appears to be coupled to the photochemical and/or dynamical parameters influencing the tropospheric oxidative potential over these regions. A combination of factors is suggested as the cause of the variability observed in the NOy budget. In addition, the apparent stability of compounds represented by the NOy budget deficit in the lower-altitude range questions the ability of these compounds to participate as reversible reservoirs for "active" odd nitrogen and suggests that some portion of the NOy budget may consist of relatively unreactive nitrogen-containing compounds.

    When it comes to rationalizing order-picking systems, many companies still have catching up to do.
This was the finding of a survey of roughly 800 companies conducted by the Fraunhofer Institute for Material Flow and Logistics in Dortmund. None of the companies uses automated order-picking machines; the prerequisites for end-to-end automation are lacking.
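The NOy budget-closure test in the ABLE 3B abstract above is simple arithmetic: compare the directly measured total NOy against the sum of its individually measured components. A minimal sketch, using hypothetical mixing ratios in pptv rather than the campaign's actual data:

```python
def noy_deficit(noy_total, nox, hno3, pan, ppn):
    """Return (Sigma NOy(i), unexplained deficit) in the same units as the inputs."""
    sigma_noyi = nox + hno3 + pan + ppn
    return sigma_noyi, noy_total - sigma_noyi

# Hypothetical lower-troposphere sample (values are illustrative, not measured).
sigma, deficit = noy_deficit(noy_total=500.0, nox=60.0, hno3=150.0,
                             pan=120.0, ppn=20.0)
print(f"Sigma NOy(i) = {sigma} pptv, deficit = {deficit} pptv "
      f"({100 * deficit / 500.0:.0f}% of NOy)")  # → 350.0 pptv, 150.0 pptv (30% of NOy)
```

A deficit of 30 to 60 per cent of total NOy, as reported for Alaska and the Hudson Bay lowlands, would correspond to the "missing compounds" the abstract discusses.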

    A Distributed Multilevel Force-directed Algorithm

    The wide availability of powerful and inexpensive cloud computing services naturally motivates the study of distributed graph layout algorithms, able to scale to very large graphs. Nowadays, to process Big Data, companies are increasingly relying on PaaS infrastructures rather than buying and maintaining complex and expensive hardware. So far, only a few examples of basic force-directed algorithms that work in a distributed environment have been described. Instead, the design of a distributed multilevel force-directed algorithm is a much more challenging task, not yet addressed. We present the first multilevel force-directed algorithm based on a distributed vertex-centric paradigm, and its implementation on Giraph, a popular platform for distributed graph algorithms. Experiments show the effectiveness and the scalability of the approach. Using an inexpensive cloud computing service of Amazon, we draw graphs with ten million edges in about 60 minutes. Comment: Appears in the Proceedings of the 24th International Symposium on Graph Drawing and Network Visualization (GD 2016).
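For readers unfamiliar with the family of algorithms the abstract above builds on, a minimal serial sketch of a spring-embedder (Fruchterman-Reingold-style) layout step may help. This is the classical single-machine core, not the paper's distributed multilevel Giraph implementation; all parameter choices here are illustrative assumptions:

```python
import math
import random

def fr_layout(vertices, edges, iters=50, width=1.0, height=1.0, seed=7):
    """Serial Fruchterman-Reingold sketch: repulsion between all pairs,
    attraction along edges, displacement capped by a cooling temperature."""
    rng = random.Random(seed)
    pos = {v: [rng.random() * width, rng.random() * height] for v in vertices}
    k = math.sqrt(width * height / len(vertices))  # ideal edge length
    t = width / 10.0                               # initial temperature
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in vertices}
        vs = list(vertices)
        # Repulsive force k^2/d between every vertex pair.
        for i, v in enumerate(vs):
            for u in vs[i + 1:]:
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[v][0] += dx / d * f; disp[v][1] += dy / d * f
                disp[u][0] -= dx / d * f; disp[u][1] -= dy / d * f
        # Attractive force d^2/k along each edge.
        for v, u in edges:
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[v][0] -= dx / d * f; disp[v][1] -= dy / d * f
            disp[u][0] += dx / d * f; disp[u][1] += dy / d * f
        # Move each vertex at most distance t, then cool.
        for v in vertices:
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.95
    return pos

positions = fr_layout([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
```

The all-pairs repulsion makes each iteration O(n^2), which is exactly why multilevel coarsening and distribution across a cluster, as in the paper, are needed for graphs with millions of edges.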

    Partitioner Selection with EASE to Optimize Distributed Graph Processing

    For distributed graph processing on massive graphs, a graph is partitioned into multiple equally-sized parts which are distributed among machines in a compute cluster. In the last decade, many partitioning algorithms have been developed which differ from each other with respect to the partitioning quality, the run-time of the partitioning and the type of graph for which they work best. The plethora of graph partitioning algorithms makes it a challenging task to select a partitioner for a given scenario. Different studies exist that provide qualitative insights into the characteristics of graph partitioning algorithms that support a selection. However, in order to enable automatic selection, a quantitative prediction of the partitioning quality, the partitioning run-time and the run-time of subsequent graph processing jobs is needed. In this paper, we propose a machine learning-based approach to provide such a quantitative prediction for different types of edge partitioning algorithms and graph processing workloads. We show that training based on generated graphs achieves high accuracy, which can be further improved when using real-world data. Based on the predictions, the automatic selection reduces the end-to-end run-time on average by 11.1% compared to a random selection, by 17.4% compared to selecting the partitioner that yields the lowest cut size, and by 29.1% compared to the worst strategy, respectively. Furthermore, in 35.7% of the cases, the best strategy was selected. Comment: To appear at IEEE International Conference on Data Engineering (ICDE 2023).
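The selection scheme the abstract describes — predict a quantity per partitioner, then pick the minimum — can be sketched in a few lines. The cost models and partitioner names below are invented for illustration and are not the paper's actual EASE models or features:

```python
import math

# Hypothetical per-partitioner cost models (coefficients are made up):
# predicted run-time = a * log10(|E|) + b * density + c.
MODELS = {
    "2D-hash": (2.0, 0.10, 1.0),
    "HDRF":    (3.5, 0.02, 0.5),
    "DBH":     (2.8, 0.05, 0.8),
}

def select_partitioner(num_vertices, num_edges):
    """Predict end-to-end run-time for each candidate partitioner from
    simple graph features and return the argmin plus all predictions."""
    density = num_edges / num_vertices
    log_e = math.log10(num_edges)
    preds = {name: a * log_e + b * density + c
             for name, (a, b, c) in MODELS.items()}
    best = min(preds, key=preds.get)
    return best, preds

best, preds = select_partitioner(1_000_000, 10_000_000)
```

In the paper the prediction step is a trained machine learning model rather than fixed linear coefficients, but the decision rule — rank candidates by predicted run-time and select the minimum — is the same shape.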