718 research outputs found

    Global Optimization for a Class of Nonlinear Sum of Ratios Problem

    We present a branch-and-bound algorithm for globally solving the sum-of-ratios problem. In this problem, each term of the objective function is a ratio of two functions, each a weighted sum of absolute values of affine functions. The problem has an important application in financial optimization, but global optimization algorithms for it remain rare in the literature. In the proposed algorithm, the branch-and-bound search uses rectangular partitioning and takes place in a space whose dimension is typically much smaller than that of the space containing the problem's decision variables. Convergence of the algorithm is shown. Finally, numerical examples are given to validate the approach.
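The rectangular branch-and-bound idea described above can be sketched generically (a minimal illustration, not the paper's algorithm: the example objective, the interval-style bounds, and the box-splitting rule are assumptions). Each ratio term has a numerator and denominator of the form const + Σ|aᵀx + c|; over a box, the exact range of each affine piece gives a lower bound on each ratio, and boxes are split along their longest edge until the incumbent is certified optimal:

```python
import heapq

def affine_range(a, c, lo, hi):
    """Exact range of a.x + c over the box [lo, hi] (componentwise)."""
    mn = c + sum(min(ai * l, ai * h) for ai, l, h in zip(a, lo, hi))
    mx = c + sum(max(ai * l, ai * h) for ai, l, h in zip(a, lo, hi))
    return mn, mx

def abs_range(mn, mx):
    """Range of |t| when t ranges over [mn, mx]."""
    if mn <= 0.0 <= mx:
        return 0.0, max(-mn, mx)
    return min(abs(mn), abs(mx)), max(abs(mn), abs(mx))

def sum_abs_range(pieces, const, lo, hi):
    """Range of const + sum_j |a_j.x + c_j| over the box."""
    s_lo, s_hi = const, const
    for a, c in pieces:
        r_lo, r_hi = abs_range(*affine_range(a, c, lo, hi))
        s_lo, s_hi = s_lo + r_lo, s_hi + r_hi
    return s_lo, s_hi

def evaluate(terms, x):
    """Objective: sum of (num_const + sum|..|) / (den_const + sum|..|)."""
    total = 0.0
    for num, nc, den, dc in terms:
        n = nc + sum(abs(sum(ai * xi for ai, xi in zip(a, x)) + c) for a, c in num)
        d = dc + sum(abs(sum(ai * xi for ai, xi in zip(a, x)) + c) for a, c in den)
        total += n / d
    return total

def lower_bound(terms, lo, hi):
    lb = 0.0
    for num, nc, den, dc in terms:
        n_lo, _ = sum_abs_range(num, nc, lo, hi)
        _, d_hi = sum_abs_range(den, dc, lo, hi)
        lb += n_lo / d_hi  # assumes denominators are positive on the box
    return lb

def branch_and_bound(terms, lo, hi, tol=1e-4, max_iter=20000):
    mid = tuple((l + h) / 2 for l, h in zip(lo, hi))
    best_x, best_f = mid, evaluate(terms, mid)
    heap = [(lower_bound(terms, lo, hi), lo, hi)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)
        if lb >= best_f - tol:  # incumbent is globally optimal within tol
            break
        k = max(range(len(lo)), key=lambda i: hi[i] - lo[i])  # longest edge
        m = (lo[k] + hi[k]) / 2
        for c_lo, c_hi in ((lo, tuple(m if i == k else h for i, h in enumerate(hi))),
                           (tuple(m if i == k else l for i, l in enumerate(lo)), hi)):
            mid = tuple((l + h) / 2 for l, h in zip(c_lo, c_hi))
            f_mid = evaluate(terms, mid)
            if f_mid < best_f:
                best_x, best_f = mid, f_mid
            c_lb = lower_bound(terms, c_lo, c_hi)
            if c_lb < best_f - tol:
                heapq.heappush(heap, (c_lb, c_lo, c_hi))
    return best_x, best_f

# Toy instance: minimize (|x-1|+1)/(|x|+2) + (|x+1|+1)/(|x|+2) on [-2, 2];
# the global minimum is 4/3, attained at x = 1 and x = -1.
terms = [
    ([((1.0,), -1.0)], 1.0, [((1.0,), 0.0)], 2.0),
    ([((1.0,), 1.0)], 1.0, [((1.0,), 0.0)], 2.0),
]
x_opt, f_opt = branch_and_bound(terms, (-2.0,), (2.0,))
```

Note that the branching here happens in the full decision space; the paper's contribution is precisely that its search runs in a lower-dimensional space.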

    Partial Identification with Proxy of Latent Confoundings via Sum-of-ratios Fractional Programming

    Due to the unobservability of confoundings, quantitatively computing causal effects has been a widespread concern. To address this challenge, proxy-based negative control approaches are commonly adopted, in which auxiliary outcome variables $\bm{W}$ are introduced as proxies of the confoundings $\bm{U}$. However, these approaches rely on strong assumptions such as reversibility, completeness, or bridge functions. These assumptions lack intuitive empirical interpretation and solid verification techniques, which limits their real-world applications. For instance, such approaches are inapplicable when the transition matrix $P(\bm{W} \mid \bm{U})$ is irreversible. In this paper, we focus on a weaker assumption: the partial observability of $P(\bm{W} \mid \bm{U})$. We develop a more general single-proxy negative control method called Partial Identification via Sum-of-ratios Fractional Programming (PI-SFP), a global optimization algorithm based on the branch-and-bound strategy that aims to provide valid bounds on the causal effect. In simulations, PI-SFP provides promising numerical results and fills blank spots that previous literature could not handle, such as cases where only partial information about $P(\bm{W} \mid \bm{U})$ is available.

    Efficient Globally Optimal Resource Allocation in Wireless Interference Networks

    Radio resource allocation in communication networks is essential to achieve optimal performance and resource utilization. In modern interference networks the corresponding optimization problems are often nonconvex and their solution requires significant computational resources. Hence, practical systems usually use algorithms with no or only weak optimality guarantees for complexity reasons. Nevertheless, assessing the quality of these methods requires knowledge of the globally optimal solution. State-of-the-art global optimization approaches mostly employ Tuy's monotonic optimization framework, which has some major drawbacks, especially when dealing with fractional objectives or complicated feasible sets. In this thesis, two novel global optimization frameworks are developed. The first is based on the successive incumbent transcending (SIT) scheme to avoid numerical problems with complicated feasible sets. It inherently differentiates between convex and nonconvex variables, preserving the low computational complexity in the number of convex variables without the need for cumbersome decomposition methods. It also treats fractional objectives directly without needing Dinkelbach's algorithm. Benchmarks show that it is several orders of magnitude faster than state-of-the-art algorithms. The second optimization framework is named mixed monotonic programming (MMP) and generalizes monotonic optimization. At its core is a novel bounding mechanism accompanied by an efficient branch-and-bound (BB) implementation that helps exploit partial monotonicity without requiring a reformulation in terms of difference of increasing (DI) functions. While this often leads to better bounds and faster convergence, the main benefit is its versatility. Numerical experiments show that MMP can outperform monotonic programming by a few orders of magnitude, both in run time and memory consumption. 
Both frameworks are applied to maximize throughput and energy efficiency (EE) in wireless interference networks. In the first application scenario, MMP is applied to evaluate the EE gain rate splitting might provide over point-to-point codes in Gaussian interference channels. In the second scenario, the SIT-based algorithm is applied to study throughput and EE for multi-way relay channels with amplify-and-forward relaying. In both cases, rate-splitting gains of up to 4.5% are observed, even though some limiting assumptions have been made.
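The mixed monotonic bounding mechanism can be illustrated with a toy sketch (the single-link rate function and link-gain values below are assumptions for illustration, not the thesis's models). If f(p) = F(p, p) for an F(x, y) that is increasing in x and decreasing in y, then over any box [lo, hi] every feasible point satisfies F(lo, hi) ≤ f(p) ≤ F(hi, lo), which gives the bounds a BB scheme needs:

```python
import math

# Hypothetical two-user interference rate with assumed gains:
# f(p) = log2(1 + G11*p1 / (G21*p2 + SIGMA)) is increasing in p1, decreasing in p2.
G11, G21, SIGMA = 1.0, 0.5, 1.0

def F(x, y):
    """MM representation: increasing in x, decreasing in y, with f(p) = F(p, p)."""
    return math.log2(1.0 + G11 * x[0] / (G21 * y[1] + SIGMA))

def box_bounds(lo, hi):
    """For every p in the box [lo, hi]: F(lo, hi) <= f(p) <= F(hi, lo)."""
    return F(lo, hi), F(hi, lo)

lb, ub = box_bounds((0.0, 0.0), (2.0, 2.0))
```

No DI decomposition of f is needed; only the partial monotonicity of F is exploited, which is the point of the MMP bounding mechanism.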

    Detecting cosmological reionization on large scales through the 21 cm HI line

    This thesis presents the development of new techniques for measuring the mean redshifted 21 cm line of neutral hydrogen during reionization, known as the 21 cm cosmological reionization monopole. Successful observations could identify the nature of the first stars and test theories of galaxy and large-scale structure formation. The goal was to specify, construct, and calibrate a portable radio telescope to measure the 21 cm monopole in the frequency range 114 MHz to 228 MHz, which corresponds to the redshift range 11.5 > z > 5.2. The chosen approach combined a frequency-independent antenna with a digital correlation spectrometer to form a correlation radiometer. The system was calibrated against injected noise and against a modelled galactic foreground. Components were specified for calibration of the sky spectrum to 1 mK/MHz relative accuracy. Comparing simulated and measured spectra showed that bandpass calibration is limited to 11 K, that is, 1% of the foreground emission, due to larger-than-expected frequency dependence of the antenna pattern. Overall calibration, including additive contributions from the system and the radio foreground, is limited to 60 K. This is 160 times larger than the maximum possible monopole amplitude at redshift eight. Future work will refine and extend the system, known as the Cosmological Reionization Experiment Mark I (CoRE Mk I).
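The quoted frequency-to-redshift mapping follows directly from the standard 21 cm rest frequency, 1 + z = f_rest / f_obs (a quick arithmetic check, not part of the thesis text):

```python
F_REST_MHZ = 1420.405751  # rest frequency of the 21 cm hyperfine line (standard value)

def redshift(f_obs_mhz):
    """Redshift at which the 21 cm line is observed at frequency f_obs_mhz."""
    return F_REST_MHZ / f_obs_mhz - 1.0

z_hi = redshift(114.0)  # ~11.5, low-frequency end of the band
z_lo = redshift(228.0)  # ~5.2, high-frequency end of the band
```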

    The QUIET Instrument

    The Q/U Imaging ExperimenT (QUIET) is designed to measure polarization in the Cosmic Microwave Background, targeting the imprint of inflationary gravitational waves at large angular scales (~1 degree). Between 2008 October and 2010 December, two independent receiver arrays were deployed sequentially on a 1.4 m side-fed Dragonian telescope. The polarimeters that form the focal planes use a highly compact design based on High Electron Mobility Transistors (HEMTs) that provides simultaneous measurements of the Stokes parameters Q, U, and I in a single module. The 17-element Q-band polarimeter array, with a central frequency of 43.1 GHz, has the best sensitivity (69 uK sqrt(s)) and the lowest instrumental systematic errors ever achieved in this band, contributing to the tensor-to-scalar ratio at r < 0.1. The 84-element W-band polarimeter array has a sensitivity of 87 uK sqrt(s) at a central frequency of 94.5 GHz. It has the lowest systematic errors to date, contributing at r < 0.01. The two arrays together cover multipoles in the range l = 25-975. These are the largest HEMT-based arrays deployed to date. This article describes the design, calibration, performance of, and sources of systematic error for the instrument.
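The quoted array sensitivities in uK sqrt(s) translate into map noise via the standard radiometer scaling, sigma = NET / sqrt(t) (the one-hour integration time below is an illustrative assumption, not a figure from the article):

```python
import math

def map_noise_uk(net_uk_rts, t_seconds):
    """White-noise level after integrating t_seconds with sensitivity net_uk_rts [uK sqrt(s)]."""
    return net_uk_rts / math.sqrt(t_seconds)

q_band = map_noise_uk(69.0, 3600.0)  # Q-band array, one hour of integration: ~1.15 uK
w_band = map_noise_uk(87.0, 3600.0)  # W-band array, one hour of integration: ~1.45 uK
```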

    Supporting research and advanced development - Space programs summary, volume 3

    Research project reports on advanced systems analysis and engineering, telecommunication, solid propellants, guidance and control, and data system

    A high-fidelity approach to conceptual design

    We created a new methodology to perform conceptual design analysis on aircraft, using off-the-shelf, high-fidelity software tools to explore the project design space, including important preliminary design factors, thereby producing a more robust result that is less subject to compromise at later design stages. We claim that this analysis can be performed in one hour with commonly available computation resources, and therefore is applicable to conceptual design. We used the case study of a supersonic transport jet to develop these methods. For this application, we used Solidworks to create a parameterized three-dimensional CAD solid to define the exterior geometry of the aircraft and create populations of design candidates. We used STAR-CCM+ to perform an automated fluid flow analysis of these candidates, using three-dimensional, viscous, turbulent finite volume analysis and incorporating internal engine performance characteristics. We then used MATLAB to collect the data produced by these analyses, compute additional results of interest, and quantify the design space represented by a population of candidates. We heavily automated the steps of this process to allow large studies or optimization frameworks to be implemented. Our results show that the method produces a data set that is much richer than those of conventional conceptual design techniques. The method captures many interactions between aircraft systems that are normally not quantified until later phases of design: aerodynamic interactions between external lifting surfaces and between the external body and internal engine performance, and how structural constraints affect wing performance. We also produce detailed information about the aircraft's static stability. Further, the method is able to produce these results with commonly available computer hardware within the one-hour timeframe we allow for a conceptual design analysis.
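The automated CAD-CFD-postprocessing pipeline described above can be sketched as a generic parameter sweep (every function, parameter, and metric below is a hypothetical placeholder: the real pipeline drives Solidworks, STAR-CCM+, and MATLAB, and the toy aerodynamic model here is invented purely to make the sketch runnable):

```python
import itertools

def build_geometry(params):
    """Placeholder for regenerating the parameterized CAD solid."""
    return params

def run_cfd(geometry):
    """Placeholder for an automated CFD run; returns a toy performance metric."""
    sweep_deg, thickness = geometry
    lift_to_drag = 10.0 - abs(sweep_deg - 50.0) / 10.0 - 20.0 * thickness
    return {"lift_to_drag": lift_to_drag}

def summarize(results):
    """Placeholder for the post-processing step: pick the best candidate."""
    return max(results, key=lambda r: r["metrics"]["lift_to_drag"])

# Sweep a small population of design candidates, fully automated.
sweeps = [40.0, 50.0, 60.0]   # hypothetical wing sweep values, degrees
thicknesses = [0.03, 0.05]    # hypothetical thickness-to-chord ratios
results = [
    {"params": p, "metrics": run_cfd(build_geometry(p))}
    for p in itertools.product(sweeps, thicknesses)
]
best = summarize(results)
```

The value of this pattern is that each stage is a scriptable function, so the same loop can serve a six-candidate sweep or drive an optimization framework.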