
    Direct numerical integration for multi-loop integrals

    We present a method to construct a suitable contour deformation in loop momentum space for multi-loop integrals. The deformation allows the integration to be carried out numerically, directly in loop momentum space and without the introduction of Feynman or Schwinger parameters. The method can be applied to finite multi-loop integrals and, with suitable subtraction terms, to divergent multi-loop integrals. The algorithm extends techniques from the one-loop case to the multi-loop case. Examples at two and three loops are discussed explicitly. Comment: 20 pages; v2: version to be published; v3: acknowledgement added.
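    As a rough point of reference only (this is the generic textbook form of a one-loop contour deformation, not the paper's multi-loop construction; the deformation vector $\kappa$ and the scaling parameter $\lambda$ are illustrative placeholders):

```latex
% Generic shape of a contour deformation in loop momentum space (one loop,
% illustrative only): deform the real integration contour into complex
% momenta so that no propagator denominator vanishes on the new contour.
\begin{align}
  I &= \int \frac{d^D k}{(2\pi)^D}\,
        \frac{N(k)}{\prod_j \left( q_j^2 - m_j^2 + i\delta \right)},
        \qquad q_j = k + p_j,\\
  k^\mu &\;\longrightarrow\; \tilde{k}^\mu = k^\mu + i\lambda\,\kappa^\mu(k),\\
  I &= \int \frac{d^D k}{(2\pi)^D}\,
        \det\!\left(\frac{\partial \tilde{k}^\mu}{\partial k^\nu}\right)
        \frac{N(\tilde{k})}{\prod_j \left( \tilde{q}_j^2 - m_j^2 + i\delta \right)},
        \qquad \tilde{q}_j = \tilde{k} + p_j.
\end{align}
% The deformation vector \kappa must be chosen such that
% \operatorname{Im}(\tilde{q}_j^2 - m_j^2) \ge 0 for every propagator,
% i.e. the contour stays on the side selected by the +i\delta prescription.
```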

    Direct contour deformation with arbitrary masses in the loop

    We present a method that constructs a suitable deformation vector in loop momentum space when the loop integration is carried out numerically with the help of the subtraction method. The method extends previously discussed techniques from the massless case to the general case of arbitrary masses in the loop. Comment: 18 pages; version to be published.
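    For orientation (again the generic requirement rather than the paper's specific deformation vector), the constraint that masses in the loop impose on the deformation can be read off directly:

```latex
% Deformed propagator with a massive line (illustrative): expanding
% q_j -> q_j + i\lambda\kappa gives
\begin{equation}
  (q_j + i\lambda\kappa)^2 - m_j^2
    = q_j^2 - \lambda^2 \kappa^2 - m_j^2
      + 2 i \lambda\, \kappa \cdot q_j .
\end{equation}
% The imaginary part 2\lambda\,\kappa\cdot q_j must be non-negative wherever
% the real part q_j^2 - \lambda^2\kappa^2 - m_j^2 can vanish, so the
% deformation has to respect every mass shell in the loop, not only the
% massless singular surfaces.
```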

    From Reports to Maps

    In this paper we sketch a project in progress. The project aims at an application for a command and control system, intended to process military reports written in natural language. It exploits computational-linguistics techniques, especially information extraction and ontological augmentation. A prototype has already been completed. A real-world application of report processing has to go beyond pure syntactic parsing: semantic analysis is needed, and the meaning of the report has to be constructed. Moreover, the meaning has to be represented in a format that can be visualized within the so-called "common operational picture" (COP). The COP is an interactive map displaying information; COP standards are provided by NATO. Since present-day military operations -- war operations as well as peace-keeping and nation-building missions -- involve forces of many nations, the COP serves as the main tool for synchronizing actions and plans. The paper at hand provides some insight into the kinds of problems that arise when language processing has to result in map visualization, and it describes some solutions to overcome these problems.
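    A heavily simplified, hypothetical sketch of such a report-to-map pipeline (the entity patterns, gazetteer, function names, and toy report below are illustrative assumptions, not taken from the project): extract unit and location mentions from free text and emit map-ready features.

```python
# Hypothetical sketch of a report-to-map pipeline: extract (unit, location)
# mentions from a free-text report and turn them into map-ready features.
# The entity patterns, gazetteer, and example report are placeholders.
import json
import re

GAZETTEER = {  # toy gazetteer: place name -> (lon, lat)
    "Bridge Alpha": (8.241, 50.001),
    "Hill 402": (8.305, 49.972),
}

UNIT_PATTERN = re.compile(r"(\d+(?:st|nd|rd|th) \w+ (?:Battalion|Company))", re.I)
LOC_PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, GAZETTEER)) + r")\b")

def report_to_features(text: str) -> list[dict]:
    """Very rough information extraction: pair each unit mention with the
    named locations in the same sentence and emit GeoJSON-like features."""
    features = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        units = UNIT_PATTERN.findall(sentence)
        locs = LOC_PATTERN.findall(sentence)
        for unit in units:
            for loc in locs:
                lon, lat = GAZETTEER[loc]
                features.append({
                    "type": "Feature",
                    "geometry": {"type": "Point", "coordinates": [lon, lat]},
                    "properties": {"unit": unit, "location": loc},
                })
    return features

if __name__ == "__main__":
    report = "2nd Rifle Battalion observed near Bridge Alpha."
    print(json.dumps(report_to_features(report), indent=2))
```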

    Near-Optimal Approximate Shortest Paths and Transshipment in Distributed and Streaming Models

    We present a method for solving the transshipment problem (also known as uncapacitated minimum cost flow) up to a multiplicative error of $1 + \varepsilon$ in undirected graphs with non-negative edge weights, using a tailored gradient descent algorithm. Using $\tilde{O}(\cdot)$ to hide polylogarithmic factors in $n$ (the number of nodes in the graph), our gradient descent algorithm takes $\tilde{O}(\varepsilon^{-2})$ iterations, and in each iteration it solves an instance of the transshipment problem up to a multiplicative error of $\operatorname{polylog} n$. In particular, this allows us to perform a single iteration by computing a solution on a sparse spanner of logarithmic stretch. Using a randomized rounding scheme, we can further extend the method to finding approximate solutions for the single-source shortest paths (SSSP) problem. As a consequence, we improve upon prior work by obtaining the following results: (1) Broadcast CONGEST model: $(1 + \varepsilon)$-approximate SSSP using $\tilde{O}((\sqrt{n} + D)\varepsilon^{-3})$ rounds, where $D$ is the (hop) diameter of the network. (2) Broadcast congested clique model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(\varepsilon^{-2})$ rounds. (3) Multipass streaming model: $(1 + \varepsilon)$-approximate transshipment and SSSP using $\tilde{O}(n)$ space and $\tilde{O}(\varepsilon^{-2})$ passes. The previously fastest SSSP algorithms for these models leverage sparse hop sets; we bypass the hop set construction, as computing a spanner is sufficient with our method. The above bounds assume non-negative edge weights that are polynomially bounded in $n$; for general non-negative weights, running times scale with the logarithm of the maximum ratio between non-zero weights. Comment: Accepted to SIAM Journal on Computing; preliminary version in DISC 2017; abstract shortened to fit arXiv's 1920-character limit.
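    To illustrate the structural point that a low-stretch spanner per iteration suffices, the following is the classic greedy $(2k-1)$-spanner construction on a weighted undirected graph. This is a standard sequential textbook algorithm, not the paper's distributed or streaming construction; it is shown only as a sketch of the kind of subroutine the approach can rely on.

```python
# Classic greedy (2k-1)-spanner construction, shown only to illustrate the
# "one spanner per iteration suffices" idea; the CONGEST, congested clique,
# and streaming models in the paper use their own constructions.
import heapq
from collections import defaultdict

def dijkstra_bounded(adj, source, target, bound):
    """Shortest source->target distance in `adj`, or inf if it exceeds `bound`."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        if u == target:
            return d
        for v, w in adj[u]:
            nd = d + w
            if nd <= bound and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist.get(target, float("inf"))

def greedy_spanner(edges, k):
    """Return the edge list of a (2k-1)-spanner of an undirected weighted graph."""
    spanner, adj = [], defaultdict(list)
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # increasing weight
        # Keep (u, v) only if the spanner built so far cannot already
        # connect u and v within stretch 2k-1.
        if dijkstra_bounded(adj, u, v, (2 * k - 1) * w) > (2 * k - 1) * w:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

if __name__ == "__main__":
    edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.5), (2, 3, 1.0), (0, 3, 4.0)]
    print(greedy_spanner(edges, k=2))  # stretch-3 spanner
```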

    Prediction of nitrogen purification in wastewater with Machine learning

    Wastewater treatment plants are necessary to avoid environmental pollution by humans. Last year the European Commission proposed a new directive with stricter requirements for wastewater treatment plants. To meet the proposed regulatory changes regarding the allowed amount of pollution, many wastewater treatment plants need expensive facility upgrades. These upgrades may increase land use, and taxpayers will most likely have to cover the expenses related to meeting the new requirements. One possible solution for reducing cost and land use is to optimize today's processes with new technology. This study investigates whether machine learning can be used to predict the amount of nitrate contained in the wastewater after denitrification. For this purpose, historical data from two different denitrification processes at one wastewater treatment plant are utilized. The first process dosed methanol based on measurements of nitrate, oxygen, and flow before denitrification, while the second process additionally used previous measurements of nitrate after denitrification. The data were collected between 30.11.2022 and 05.01.2023. One statistical approach and two machine learning models were tested for predicting the amount of nitrate contained in the wastewater after denitrification: a seasonal autoregressive integrated moving average model with exogenous variables (SARIMAX), and the long short-term memory (LSTM) and extreme gradient boosting (XGBoost) algorithms. For the first process all models showed similar results, with SARIMAX as the best model at an MSE, RMSE, and MAE of 0.15, 0.39, and 0.29, respectively. For the second process the SARIMAX model outperformed the LSTM and XGBoost models with an MSE, RMSE, and MAE of 2.09, 1.45, and 1.24, respectively. Our research shows that it is significantly easier to obtain well-performing models for the first process than for the second. We present some aspects that should be further investigated to obtain a solution that is ready to be put into use.
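    For readers unfamiliar with the statistical baseline, the following is a minimal sketch of fitting a SARIMAX model with exogenous inputs (here: inflow nitrate, oxygen, flow) on synthetic data using statsmodels. The model orders, column names, and data are placeholders, not the configuration or data used in the study.

```python
# Minimal SARIMAX-with-exogenous-variables sketch on synthetic data.
# Column names, model orders, and the synthetic series are placeholders;
# they are not the configuration used in the study.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 500
idx = pd.date_range("2022-11-30", periods=n, freq="h")

# Synthetic exogenous inputs measured before denitrification.
exog = pd.DataFrame({
    "nitrate_in": 10 + rng.normal(0, 1, n).cumsum() * 0.05,
    "oxygen": 2 + 0.5 * np.sin(np.arange(n) * 2 * np.pi / 24),
    "flow": 100 + rng.normal(0, 5, n),
}, index=idx)

# Synthetic target: nitrate after denitrification, loosely driven by the inputs.
y = (0.3 * exog["nitrate_in"] - 0.2 * exog["oxygen"]
     + 0.01 * exog["flow"] + rng.normal(0, 0.3, n))

train, test = slice(0, 400), slice(400, n)
model = SARIMAX(y.iloc[train], exog=exog.iloc[train],
                order=(1, 0, 1), seasonal_order=(1, 0, 0, 24))
result = model.fit(disp=False)

forecast = result.forecast(steps=100, exog=exog.iloc[test])
mae = np.mean(np.abs(forecast.values - y.iloc[test].values))
print(f"MAE on held-out synthetic data: {mae:.3f}")
```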