832 research outputs found

    Global Optimization for a Class of Nonlinear Sum of Ratios Problem

    We present a branch-and-bound algorithm for globally solving the sum-of-ratios problem, in which each term of the objective function is a ratio of two functions, each a sum of absolute values of affine functions. This problem has important applications in financial optimization, yet global optimization algorithms for it remain rare in the literature. In our algorithm, the branch-and-bound search uses rectangular partitioning and takes place in a space whose dimension is typically much smaller than that of the space containing the problem's decision variables. Convergence of the algorithm is proved. Finally, numerical examples are given to validate our conclusions.
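    The rectangular branch-and-bound scheme described above can be sketched in a few lines. This is an illustrative, hypothetical instance, not the paper's algorithm: it works in one dimension, adds a constant 1 to each denominator to keep it positive, and uses naive interval bounds on each ratio (minimum numerator over maximum denominator) rather than the paper's reduced-dimension reformulation.

    ```python
    import heapq

    def abs_affine_range(a, b, lo, hi):
        """Min and max of |a*x + b| over [lo, hi]."""
        vals = (abs(a * lo + b), abs(a * hi + b))
        # the minimum is 0 if the root of a*x + b lies inside the interval
        m = 0.0 if a != 0 and lo <= -b / a <= hi else min(vals)
        return m, max(vals)

    def lower_bound(ratios, lo, hi):
        """Sum over terms of (min numerator) / (max denominator)."""
        lb = 0.0
        for num, den in ratios:
            n_min = sum(abs_affine_range(a, b, lo, hi)[0] for a, b in num)
            d_max = sum(abs_affine_range(a, b, lo, hi)[1] for a, b in den) + 1.0
            lb += n_min / d_max
        return lb

    def objective(ratios, x):
        return sum(sum(abs(a * x + b) for a, b in num) /
                   (sum(abs(a * x + b) for a, b in den) + 1.0)
                   for num, den in ratios)

    def branch_and_bound(ratios, lo, hi, tol=1e-6):
        best = min(objective(ratios, x) for x in (lo, (lo + hi) / 2, hi))
        heap = [(lower_bound(ratios, lo, hi), lo, hi)]
        while heap:
            lb, l, h = heapq.heappop(heap)
            if lb > best - tol:          # prune: box cannot beat the incumbent
                continue
            mid = (l + h) / 2
            best = min(best, objective(ratios, mid))
            for a, b in ((l, mid), (mid, h)):   # rectangular (interval) split
                sub = lower_bound(ratios, a, b)
                if sub < best - tol:
                    heapq.heappush(heap, (sub, a, b))
        return best
    ```

    For example, minimizing the single ratio |x - 1| / (|x| + 1) over [-2, 2] returns 0, attained at x = 1.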

    A New Global Optimization Algorithm for Solving a Class of Nonconvex Programming Problems

    A new two-part parametric linearization technique is proposed for globally solving a class of nonconvex programming problems (NPP). First, the two-part parametric linearization method is used to construct underestimators of the objective and constraint functions, by applying a transformation together with a parametric linear upper bounding function (LUBF) and a linear lower bounding function (LLBF) of the natural logarithm and the natural exponential function, respectively. A sequence of relaxed linear programming lower-bounding problems, embedded in a branch-and-bound algorithm, is then derived from the initial nonconvex programming problem. The proposed algorithm converges to a global optimal solution by successively solving this series of linear programming problems. Finally, examples are given to illustrate the feasibility of the presented algorithm.
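    The linear bounding idea can be illustrated with the standard convexity facts it rests on: on an interval, a concave function such as ln lies below any tangent line (a linear upper bound) and above its chord (a linear lower bound), while for the convex exponential the roles reverse. The interval and sample points below are arbitrary choices, not taken from the paper.

    ```python
    import math

    def chord(f, l, u):
        """Linear interpolant of f between l and u."""
        s = (f(u) - f(l)) / (u - l)
        return lambda x: f(l) + s * (x - l)

    def tangent(f, df, c):
        """Tangent line to f at the point c."""
        return lambda x: f(c) + df(c) * (x - c)

    l, u = 0.5, 4.0
    mid = (l + u) / 2

    ln_lower = chord(math.log, l, u)                   # LLBF for ln (chord)
    ln_upper = tangent(math.log, lambda x: 1 / x, mid) # LUBF for ln (tangent)
    exp_lower = tangent(math.exp, math.exp, mid)       # LLBF for exp (tangent)
    exp_upper = chord(math.exp, l, u)                  # LUBF for exp (chord)

    # Verify the sandwich numerically at a few points in [l, u]
    for x in (0.5, 1.0, 2.0, 3.5, 4.0):
        assert ln_lower(x) <= math.log(x) + 1e-9 <= ln_upper(x) + 2e-9
        assert exp_lower(x) <= math.exp(x) + 1e-9 <= exp_upper(x) + 2e-9
    ```

    Replacing a nonconvex term by such linear bounds is what turns each relaxed subproblem into a linear program.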

    (Global) Optimization: Historical notes and recent developments

    Recent developments in (Global) Optimization are surveyed in this paper. We collected and commented on quite a large number of recent references which, in our opinion, well represent the vivacity, depth, and breadth of scope of current computational approaches and theoretical results about nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches, respectively, we briefly sketch the origin of the discipline and observe what survived from the initial attempts, what was not considered at all, as well as a few approaches that have recently been rediscovered, mostly in connection with machine learning.

    Computational applications in stochastic operations research

    Several computational applications in stochastic operations research are presented. For each application, a computational engine is used to achieve results that would otherwise be overly tedious by hand calculation or, in some cases, mathematically intractable. Algorithms and code are developed and implemented, with specific emphasis placed on achieving exact results substantiated via Monte Carlo simulation. The code for each application is provided in the software language utilized, and the algorithms are available for coding in other environments. The topics include univariate and bivariate nonparametric random variate generation using a piecewise-linear cumulative distribution function, deriving exact statistical process control chart constants for non-normal sampling, testing probability distribution conformance to Benford's law, and transient analysis of M/M/s queueing systems. The nonparametric random variate generation chapters provide the modeler with a method of generating univariate and bivariate samples when only observed data are available. The method is completely nonparametric and is capable of mimicking multimodal joint distributions. The algorithm is black-box: no decisions are required from the modeler in generating variates for simulation. The statistical process control chart constant chapter develops constants for selected non-normal distributions and provides tabulated results for researchers who have identified a given process as non-normal. The constants derived are bias correction factors for the sample range and sample standard deviation. The Benford conformance testing chapter offers the Kolmogorov–Smirnov test as an alternative to the standard chi-square goodness-of-fit test when testing whether the leading digits of a data set are distributed according to Benford's law.
    The alternative test has the advantage of being exact for all sample sizes, removing the usual sample-size restriction of the chi-square goodness-of-fit test. The transient queueing analysis chapter develops and automates the construction of the sojourn time distribution for the nth customer in an M/M/s queue with k customers initially present at time 0 (k ≥ 0), without the usual restriction on traffic intensity, ρ < 1, providing an avenue for conducting transient analysis of various measures of performance for a given initial number of customers in the system. It also develops and automates the construction of the joint probability distribution function of sojourn times for pairs of customers, allowing calculation of the exact covariance between customer sojourn times.
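    The Benford conformance idea above can be sketched as follows. This is a hedged illustration of the mechanics (empirical leading-digit CDF versus the Benford CDF, compared via a Kolmogorov–Smirnov-style statistic), not the dissertation's exact test or its critical values.

    ```python
    import math

    def leading_digit(x):
        """First significant digit of a nonzero number."""
        s = f"{abs(x):e}"      # scientific notation: first char is the digit
        return int(s[0])

    def benford_ks_statistic(data):
        """Max gap between the empirical and Benford leading-digit CDFs."""
        counts = [0] * 10
        for x in data:
            counts[leading_digit(x)] += 1
        n = len(data)
        d_stat = emp_cdf = ben_cdf = 0.0
        for d in range(1, 10):
            emp_cdf += counts[d] / n
            ben_cdf += math.log10(1 + 1 / d)   # Benford P(first digit = d)
            d_stat = max(d_stat, abs(emp_cdf - ben_cdf))
        return d_stat

    # Powers of 2 are a classic example of near-perfect Benford conformance,
    # so their statistic is small; uniformly spread digits deviate strongly.
    powers = [2.0 ** k for k in range(1, 201)]
    print(benford_ks_statistic(powers))
    ```

    A real test would compare this statistic against exact critical values for the given sample size, which is precisely the contribution the chapter describes.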

    MEASUREMENT OF (ALPHA, NEUTRON) REACTIONS AND DEVELOPMENT OF ANALYSIS TOOLS WITH THE MAJORANA DEMONSTRATOR

    Neutrinoless double-beta decay (0νββ) is a hypothetical nuclear transition which, if observed, would prove that neutrinos are Majorana particles. In addition, the decay rate could provide an effective neutrino mass scale. The decay violates lepton number conservation and could offer a potential path to explaining the matter-antimatter asymmetry of the universe via leptogenesis. However, the experimental observation of this decay is very challenging and requires excellent detector energy resolution, low background levels, and high exposure. The MAJORANA DEMONSTRATOR experiment searches for this decay in 76Ge using P-type Point Contact (PPC) High Purity Germanium (HPGe) detectors. In addition, the DEMONSTRATOR is probing a broad range of physics, including both Standard Model (SM) and Beyond the Standard Model (BSM) physics, thanks to the experiment’s excellent energy performance, low analysis energy threshold, and low background. This dissertation begins with an overview of neutrinos and neutrinoless double-beta decay physics. It then briefly outlines the MAJORANA DEMONSTRATOR experiment and its results from the 0νββ search. The DEMONSTRATOR has achieved the best-in-field energy resolution, the result of both the intrinsic properties of the detectors and analysis efforts. A brief description of the energy calibration procedure and the energy systematics study of the DEMONSTRATOR is presented. The dissertation then describes an experimental study of 13C(α,n)16O reactions in MAJORANA’s calibration data, and presents the findings and their impact on low-background experiments. Finally, it describes a machine learning approach to analyzing waveforms in order to discriminate signal-like from background-like events in 0νββ searches.

    High-precision computation of uniform asymptotic expansions for special functions

    In this dissertation, we investigate new methods for obtaining uniform asymptotic expansions for the high-precision numerical evaluation of special functions. We first present the theoretical and computational fundamentals required for the development and, ultimately, the implementation of such methods. Applying some of these methods, we obtain efficient new convergent and uniform expansions for numerically evaluating the confluent hypergeometric functions and the Lerch transcendent at high precision. In addition, we investigate a new computational scheme for the generalized exponential integral, obtaining one of the fastest and most robust implementations in double-precision floating-point arithmetic. In this work, we aim to combine new developments in asymptotic analysis with fast and effective open-source implementations. These implementations are comparable to, and often faster than, current open-source and commercial state-of-the-art software for the evaluation of special functions.
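    As a point of reference for the generalized exponential integral, the classical convergent series for E1(x) is straightforward to implement at double precision for moderate x; the thesis's uniform asymptotic expansions target precisely the regimes where such series become ineffective. The sketch below is standard textbook material, not the dissertation's algorithm.

    ```python
    import math

    # E1(x) = -gamma - ln(x) + sum_{k>=1} (-1)^(k+1) * x^k / (k * k!),
    # convergent for x > 0 and numerically effective at double precision
    # for moderate x (for large x one would switch to an asymptotic or
    # uniform expansion instead).

    EULER_GAMMA = 0.5772156649015329

    def exp_int_e1(x, terms=60):
        if x <= 0:
            raise ValueError("this series form requires x > 0")
        total = -EULER_GAMMA - math.log(x)
        term = 1.0                       # running value of x^k / k!
        for k in range(1, terms + 1):
            term *= x / k
            total += (-1) ** (k + 1) * term / k
        return total

    print(exp_int_e1(1.0))   # ≈ 0.2193839
    ```

    The series loses accuracy through cancellation as x grows, which is exactly why asymptotic and uniform expansions matter for a robust implementation.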


    From n-grams to n-sets: A Fuzzy-Logic-Based Approach to Shakespearian Authorship Attribution.

    This thesis surveys the principles of Fuzzy Logic as they have been applied over the last three decades in the micro-electronics field and, in the context of resolving problems of authorship verification and attribution, shows how these principles can assist with the detection of stylistic similarities or dissimilarities between an anonymous, disputed play and an author’s general or pattern-based known style. The main stylistic markers are the counts of semantic sets of 100 individual word tokens and an index of the counts of these words’ frequencies (a cosine index), as found in the first extract of approximately 10,000 words of each of 27 well-attributed Shakespearian plays. Based on these markers, their geometrical representation, fuzzy modelling, and the grounds of Set Theory and Boolean Algebra, in the core part of this thesis three Mamdani (Type-1) genre-based Fuzzy Expert Systems were built for the detection of degrees (measured on a scale from 0 to 1) of Shakespearianness of disputed and probably co-authored plays of the early modern English period. Each of these three expert systems is composed of seven input and two output variables that are associated through a set of approximately 30 to 40 rules. There is a detailed description of the properties of the three expert systems’ inference mechanisms and of the various experimentation phases. There is also an indicative graphical analysis of the phases of the experimentation and a thorough explanation of terms such as partial-truth membership, approximate reasoning, and output centroids on the x-axis of a two-dimensional space.
    Throughout the thesis there is an extensive demonstration of various Fuzzy Logic techniques, including Sugeno-ANFIS (adaptive neuro-fuzzy inference systems), with which the style of Shakespeare can be modelled in order to compare it with well-attributed plays by other authors, or with plays that are not included in the strict Shakespearian canon of the selected 27 well-attributed, sole-authored plays. In addition, other relevant issues of stylometric concern are discussed, such as the investigation and classification of known ‘problem’ and disputed plays through holistic classifiers (irrespective of genre). The results of the experimentation advocate the use of this novel, automated, computer-simulation-based method of classification in the stylometric field for various purposes. Indeed, the three models have succeeded in detecting the low Shakespearianness of non-Shakespearian plays, and the results they provided for anonymous, disputed plays conform with the general evidence of historical scholarship. The original contribution of this thesis is therefore to define fully functional automated fuzzy classifiers of Shakespearianness. The consequence of this finding is that we now know that the principles of fuzzy modelling can be applied to create Fuzzy Expert Stylistic Classifiers and thereby detect degrees of similarity between a play under scrutiny and the general or pattern-based known style of a specific author (in our case, Shakespeare). Furthermore, this thesis shows that, given certain premises, counts of words’ frequencies and counts of semantic sets of words can be employed satisfactorily for stylistic discrimination.
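    A Mamdani (Type-1) inference cycle of the kind the thesis builds on can be reduced to a few steps: fuzzify the input with membership functions, fire each rule with min implication, aggregate with max, and defuzzify by computing the centroid on a discretized output axis. The single input, the triangular membership functions, and the two rules below are toy stand-ins, not the thesis's seven-input genre-based systems.

    ```python
    def tri(a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        def mu(x):
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)
        return mu

    # Input: a hypothetical similarity score in [0, 1] (e.g. a cosine index)
    low_sim, high_sim = tri(-0.5, 0.0, 0.6), tri(0.4, 1.0, 1.5)
    # Output: degree of "Shakespearianness" in [0, 1]
    low_shak, high_shak = tri(-0.5, 0.0, 0.6), tri(0.4, 1.0, 1.5)

    def infer(sim, steps=1000):
        # Rule 1: IF similarity IS low  THEN Shakespearianness IS low
        # Rule 2: IF similarity IS high THEN Shakespearianness IS high
        w1, w2 = low_sim(sim), high_sim(sim)     # rule firing strengths
        num = den = 0.0
        for i in range(steps + 1):               # centroid over [0, 1]
            y = i / steps
            # min implication per rule, max aggregation across rules
            mu = max(min(w1, low_shak(y)), min(w2, high_shak(y)))
            num += mu * y
            den += mu
        return num / den if den else 0.5

    print(infer(0.9))   # high similarity: centroid lands well above 0.5
    ```

    The centroid is the "output centroid on the x-axis" the thesis refers to: a single crisp degree of Shakespearianness distilled from the partial truths of all fired rules.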

    Computational methods and software systems for dynamics and control of large space structures

    Two areas of crucial importance to the computer-based simulation of large space structures are discussed. The first involves the multibody dynamics (MBD) of flexible space structures, with applications directed to deployment, construction, and maneuvering. The second deals with advanced software systems, with emphasis on parallel processing. The latest research thrust in the second area involves massively parallel computers.