
    An ADMM Based Framework for AutoML Pipeline Configuration

    Full text link
    We study the AutoML problem of automatically configuring machine learning pipelines by jointly selecting algorithms and their appropriate hyper-parameters for all steps in supervised learning pipelines. This black-box (gradient-free) optimization with mixed integer and continuous variables is a challenging problem. We propose a novel AutoML scheme by leveraging the alternating direction method of multipliers (ADMM). The proposed framework is able to (i) decompose the optimization problem into easier sub-problems that have a reduced number of variables and circumvent the challenge of mixed variable categories, and (ii) incorporate black-box constraints alongside the black-box optimization objective. We empirically evaluate the flexibility (in utilizing existing AutoML techniques), effectiveness (against open-source AutoML toolkits), and unique capability (of executing AutoML with practically motivated black-box constraints) of our proposed scheme on a collection of binary classification data sets from the UCI ML and OpenML repositories. We observe that on average our framework provides significant gains in comparison to other AutoML frameworks (Auto-sklearn & TPOT), highlighting the practical advantages of this framework.
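
    The decomposition idea can be illustrated with a minimal sketch: alternate between a sub-problem over continuous hyper-parameters and a sub-problem over discrete algorithm choices, instead of one joint mixed-variable black-box search. The function names, the toy objective, and the outer loop below are illustrative assumptions, not the paper's actual ADMM updates.

```python
# Minimal sketch of the decomposition idea: alternate between a continuous
# sub-problem and a combinatorial sub-problem rather than searching the joint
# mixed-variable space. The toy loss stands in for cross-validated pipeline error.
import random

ALGOS = ["logreg", "tree", "svm"]            # discrete algorithm choices

def blackbox_loss(algo, c):
    """Hypothetical stand-in for cross-validated pipeline error."""
    base = {"logreg": 0.30, "tree": 0.25, "svm": 0.28}[algo]
    return base + (c - 0.5) ** 2             # toy continuous response

def solve_continuous(algo, trials=20):
    """Continuous sub-problem: random search over a hyper-parameter c in [0, 1]."""
    best_c, best_f = None, float("inf")
    for _ in range(trials):
        c = random.random()
        f = blackbox_loss(algo, c)
        if f < best_f:
            best_c, best_f = c, f
    return best_c, best_f

def solve_discrete(c):
    """Combinatorial sub-problem: pick the best algorithm for the current c."""
    return min(ALGOS, key=lambda a: blackbox_loss(a, c))

algo, c = random.choice(ALGOS), 0.5
for _ in range(5):                           # alternating (ADMM-like) outer loop
    c, loss = solve_continuous(algo)
    algo = solve_discrete(c)
print(algo, round(c, 3), round(loss, 4))
```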

    Convex Relaxations for Gas Expansion Planning

    Full text link
    Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, globally optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other, larger real-world networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
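
    One of the ingredients named in the abstract, the McCormick relaxation, can be sketched in a few lines: for a bilinear term w = x*y with box bounds on x and y, four linear inequalities bound the product. The sketch below only shows this envelope for a single product under assumed bounds; it is not the paper's full mixed-integer second-order cone model.

```python
# McCormick envelope for a bilinear term w = x*y on the box [xl, xu] x [yl, yu]:
# two linear under-estimators and two linear over-estimators of the product.
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """Linear under/over-estimators of w = x*y on the given box."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper

# The true product always lies inside the envelope; the relaxation becomes exact
# when one of the variables sits at a bound (e.g. once a direction is fixed).
lo, hi = mccormick_bounds(x=2.0, y=3.0, xl=0.0, xu=5.0, yl=1.0, yu=4.0)
assert lo <= 2.0 * 3.0 <= hi
print(lo, hi)
```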

    Robustness of Trans-European Gas Networks

    Full text link
    Here we uncover the load and fault-tolerant backbones of the trans-European gas pipeline network. Combining topological data with information on inter-country flows, we estimate the global load of the network and its tolerance to failures. To do this, we apply two complementary methods generalized from the betweenness centrality and the maximum flow. We find that the gas pipeline network has grown to satisfy a dual purpose: on one hand, the major pipelines are crossed by a large number of shortest paths, thereby increasing the efficiency of the network; on the other hand, a non-operational pipeline causes only a minimal impact on network capacity, implying that the network is error-tolerant. These findings suggest that the trans-European gas pipeline network is robust, i.e., error-tolerant to failures of high-load links. Comment: 11 pages, 8 figures (minor changes)
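
    The two complementary measures can be illustrated on a toy capacitated graph: edge betweenness as a proxy for load, and the max-flow drop after removing a pipeline as a proxy for error tolerance. The graph below is synthetic, not the trans-European network, and the sketch assumes the networkx package is available; it does not reproduce the paper's generalized estimators.

```python
# Toy illustration: shortest-path load (edge betweenness) vs. capacity impact of
# removing each pipeline (max-flow before minus max-flow after).
import networkx as nx

G = nx.DiGraph()
G.add_edge("source", "hub", capacity=10.0)
G.add_edge("hub", "sink", capacity=8.0)
G.add_edge("source", "sink", capacity=3.0)   # parallel, lower-capacity route

# Load proxy: how many shortest paths cross each pipeline.
load = nx.edge_betweenness_centrality(G)

# Tolerance proxy: capacity lost when one pipeline becomes non-operational.
base_flow, _ = nx.maximum_flow(G, "source", "sink")
for u, v in list(G.edges()):
    H = G.copy()
    H.remove_edge(u, v)
    flow, _ = nx.maximum_flow(H, "source", "sink")
    print(f"{u}->{v}: load={load[(u, v)]:.2f}, capacity drop={base_flow - flow:.1f}")
```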

    A mathematical framework for modelling and evaluating natural gas pipeline networks under hydrogen injection

    Get PDF
    This article presents the framework of a mathematical formulation for modelling and evaluating natural gas pipeline networks under hydrogen injection. The model development is based on gas transport through pipelines and compressors, which compensate for the pressure drops, mainly by applying the mass and energy balances to the basic elements of the network. The model was initially implemented for natural gas transport, and the principle of its extension to hydrogen-natural gas mixtures is presented. The objective is the treatment of the classical fuel-minimization problem in compressor stations. The optimization procedure has been formulated by means of a nonlinear technique within the General Algebraic Modelling System (GAMS) environment. This work deals with the adaptation of current natural gas transmission networks to the transport of hydrogen-natural gas mixtures. More precisely, the quantitative amount of hydrogen that can be added to natural gas can be determined. The studied pipeline network, initially proposed by Abbaspour et al. (2005), is revisited here for the case of hydrogen-natural gas mixtures. Typical quantitative results are presented, showing that the addition of hydrogen to natural gas significantly decreases the transmitted power: the maximum fraction of hydrogen that can be added to natural gas is around 6 mass percent for this example.
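
    A back-of-the-envelope calculation illustrates why blending hydrogen into natural gas lowers the transmitted power: per unit volume, hydrogen carries far less energy than methane. The property values below are rounded textbook figures, natural gas is approximated as pure methane, and the calculation is an illustrative assumption, not the paper's GAMS network model.

```python
# Volumetric heating value of a H2/CH4 blend vs. hydrogen mass fraction,
# assuming ideal mixing and natural gas ~ pure methane (rounded property values).
M_H2, M_CH4 = 2.016, 16.04          # molar masses [g/mol]
HHV_H2, HHV_CH4 = 12.7, 39.8        # approx. higher heating values [MJ/Nm^3]

def blend_energy_density(h2_mass_frac):
    """Heating value per normal cubic metre of the blend, in MJ/Nm^3."""
    x_h2 = (h2_mass_frac / M_H2) / (h2_mass_frac / M_H2 + (1 - h2_mass_frac) / M_CH4)
    return x_h2 * HHV_H2 + (1 - x_h2) * HHV_CH4

for w in (0.0, 0.02, 0.06):          # ~6 mass % is the limit quoted in the abstract
    e = blend_energy_density(w)
    drop = 100 * (1 - e / blend_energy_density(0.0))
    print(f"H2 mass fraction {w:.2f}: ~{e:.1f} MJ/Nm^3 ({drop:.0f}% below pure CH4)")
```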

    Very weak lensing in the CFHTLS Wide: Cosmology from cosmic shear in the linear regime

    Full text link
    We present an exploration of weak lensing by large-scale structure in the linear regime, using the third-year (T0003) CFHTLS Wide data release. Our results place tight constraints on the scaling of the amplitude of the matter power spectrum sigma_8 with the matter density Omega_m. Spanning 57 square degrees to i'_AB = 24.5 over three independent fields, the unprecedented contiguous area of this survey permits high signal-to-noise measurements of two-point shear statistics from 1 arcmin to 4 degrees. Understanding systematic errors in our analysis is vital in interpreting the results. We therefore demonstrate the percent-level accuracy of our method using STEP simulations, an E/B-mode decomposition of the data, and the star-galaxy cross-correlation function. We also present a thorough analysis of the galaxy redshift distribution using redshift data from the CFHTLS T0003 Deep fields that probe the same spatial regions as the Wide fields. We find sigma_8(Omega_m/0.25)^0.64 = 0.785+-0.043 using the aperture-mass statistic for the full range of angular scales for an assumed flat cosmology, in excellent agreement with WMAP3 constraints. The largest physical scale probed by our analysis is 85 Mpc, assuming a mean lens redshift of 0.5 and a LCDM cosmology. This allows, for the first time, cosmology to be constrained using only cosmic shear measurements in the linear regime. Using only angular scales theta > 85 arcmin, we find sigma_8(Omega_m/0.25)_lin^0.53 = 0.837+-0.084, which agrees with the results from our full analysis. Combining our results with data from WMAP3, we find Omega_m = 0.248+-0.019 and sigma_8 = 0.771+-0.029. Comment: 23 pages, 16 figures (A&A accepted)
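
    The quoted degeneracy relation can be evaluated directly to read off sigma_8 at a chosen Omega_m. The snippet below uses only the best-fit values from the abstract (0.785, exponent 0.64, pivot 0.25); picking Omega_m = 0.248 from the combined fit is an illustrative choice.

```python
# Evaluate the abstract's best-fit relation sigma_8 * (Omega_m / 0.25)**0.64 = 0.785.
def sigma8_from_relation(omega_m, amp=0.785, alpha=0.64, pivot=0.25):
    return amp * (pivot / omega_m) ** alpha

# At Omega_m = 0.248 the relation gives ~0.79, consistent within the quoted
# uncertainties with the jointly fitted sigma_8 = 0.771 +/- 0.029.
print(round(sigma8_from_relation(0.248), 3))
```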

    Machine Learning at Microsoft with ML .NET

    Full text link
    Machine Learning is transitioning from an art and science into a technology available to every developer. In the near future, every application on every platform will incorporate trained models to encode data-based decisions that would be impossible for developers to author. This presents a significant engineering challenge, since currently data science and modeling are largely decoupled from standard software development processes. This separation makes incorporating machine learning capabilities inside applications unnecessarily costly and difficult, and furthermore discourages developers from embracing ML in the first place. In this paper we present ML .NET, a framework developed at Microsoft over the last decade in response to the challenge of making it easy to ship machine learning models in large software applications. We present its architecture, and illuminate the application demands that shaped it. Specifically, we introduce DataView, the core data abstraction of ML .NET, which allows it to capture full predictive pipelines efficiently and consistently across training and inference lifecycles. We close the paper with a surprisingly favorable performance study of ML .NET compared to more recent entrants, and a discussion of some lessons learned.
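
    The central idea of capturing the full predictive pipeline in one object, so that training and inference apply identical transformations, can be sketched with a rough Python analogy. This is explicitly not ML .NET's DataView API (which is .NET-based); the scikit-learn Pipeline below is a stand-in used only to illustrate the concept, and assumes scikit-learn is installed.

```python
# Rough analogy (not ML .NET's actual API): one object holds featurization plus
# the trained model, so the same transformations run at training and at inference.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("featurize", TfidfVectorizer()),        # data transformation step
    ("model", LogisticRegression()),         # trained predictor
])

pipeline.fit(
    ["good product", "terrible service", "great support", "awful quality"],
    [1, 0, 1, 0],
)
print(pipeline.predict(["really great"]))    # same featurization reused at inference
```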