
    A Euclidean Distance Matrix Model for Convex Clustering

    Clustering has been one of the most basic and essential problems in unsupervised learning, with applications in many critical fields. The recently proposed sum-of-norms (SON) model of Pelckmans et al. (2005), Lindsten et al. (2011) and Hocking et al. (2011) has received a lot of attention. The advantage of the SON model is its theoretical guarantee of perfect recovery, established by Sun et al. (2018). It also provides great opportunities for designing efficient algorithms for solving the SON model. The semismooth Newton based augmented Lagrangian method of Sun et al. (2018) has demonstrated superior performance over the alternating direction method of multipliers (ADMM) and the alternating minimization algorithm (AMA). In this paper, we propose a Euclidean distance matrix model based on the SON model. An efficient majorization penalty algorithm is proposed to solve the resulting model. Extensive numerical experiments are conducted to demonstrate the efficiency of the proposed model and the majorization penalty algorithm. Comment: 32 pages, 3 figures, 3 tables.
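
    For reference, the sum-of-norms model the abstract builds on (written here with optional pair weights w_ij, as commonly done in the cited literature) is, in LaTeX,

        \min_{x_1,\dots,x_n \in \mathbb{R}^d} \; \frac{1}{2}\sum_{i=1}^{n} \lVert x_i - a_i \rVert^2 \;+\; \lambda \sum_{1 \le i < j \le n} w_{ij}\, \lVert x_i - x_j \rVert,

    where the a_i are the observed data points, each x_i is a centroid attached to a_i, and λ ≥ 0 controls fusion: as λ grows, centroids coalesce, and points sharing a centroid are declared to be in the same cluster.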

    Conditional Gradient Algorithms for Rank-One Matrix Approximations with a Sparsity Constraint

    The sparsity-constrained rank-one matrix approximation problem is a difficult mathematical optimization problem that arises in a wide array of useful applications in engineering, machine learning and statistics, and the design of algorithms for this problem has attracted intensive research activity. We introduce an algorithmic framework, called ConGradU, that unifies a variety of seemingly different algorithms derived from disparate approaches, and allows for deriving new schemes. Building on the old and well-known conditional gradient algorithm, ConGradU is a simplified version with unit step size, and it yields a generic algorithm that either is given by an analytic formula or requires very low computational complexity. Mathematical properties are systematically developed and numerical experiments are given. Comment: Minor changes. Final version. To appear in SIAM Review.
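
    As a concrete instance of the framework: for the sparsity-constrained maximization max { x^T A x : ||x|| = 1, ||x||_0 <= s } with A positive semidefinite, the conditional gradient step with unit step size reduces to a truncated power iteration. A minimal sketch in Python (the function names and the test matrix are illustrative, not from the paper):

        import numpy as np

        def congradu_step(A, x, s):
            """One unit-step conditional gradient step: linearize x^T A x at x
            and maximize the linear form over {||z|| = 1, ||z||_0 <= s}."""
            g = A @ x                          # gradient direction (up to a factor of 2)
            z = np.zeros_like(g)
            keep = np.argsort(np.abs(g))[-s:]  # indices of the s largest-magnitude entries
            z[keep] = g[keep]
            return z / np.linalg.norm(z)       # back onto the unit sphere

        def sparse_rank_one(A, s, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.standard_normal(A.shape[0])
            x /= np.linalg.norm(x)
            for _ in range(iters):
                x = congradu_step(A, x, s)
            return x

        # usage: an s-sparse leading direction of a sample covariance matrix
        A = np.cov(np.random.default_rng(1).standard_normal((20, 500)))
        x = sparse_rank_one(A, s=5)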

    Advances in multidimensional unfolding

    Multidimensional unfolding is an analysis technique that produces maps of two sets of objects, for example persons and products, based on the persons' preferences for those products. The distances between the persons and the products in the map should correspond as closely as possible to these preferences, such that a small distance represents a strong preference, while a large distance corresponds to a weak preference. Since its conception in the 1960s, however, unfolding has been plagued by the so-called degeneracy problem: solutions that are perfect in terms of the loss function (the distances represent the preferences perfectly) but completely useless in terms of interpretation (the perfect representation is meaningless). This dissertation offers two possible solutions to the degeneracy problem. The more general solution uses a penalty function, which penalizes the solution when it threatens to degenerate. The algorithm has been used for the implementation of PREFSCAL, the unfolding program of IBM SPSS STATISTICS. With the degeneracy problem under control, the way is clear to develop the unfolding model further: additional explanatory variables can be added for interpretation and prediction. The extent to which data may be missing without decisively influencing the final solution, the map, has also been investigated extensively. LEI Universiteit Leiden. Multivariate analysis of psychological data.
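
    The penalty idea can be written down schematically (this is a sketch of the mechanism, not the exact PREFSCAL loss function; the weight ω is illustrative): degenerate unfolding solutions drive all fitted distances toward a constant, so the raw normalized stress σ² is multiplied by a factor that blows up as the coefficient of variation ν of the transformed preferences vanishes,

        \sigma^2_{\mathrm{pen}} \;=\; \sigma^2 \left( 1 + \frac{\omega}{\nu^2(\hat{\gamma})} \right), \qquad \nu(\hat{\gamma}) \;=\; \frac{\operatorname{sd}(\hat{\gamma})}{\operatorname{mean}(\hat{\gamma})},

    so a constant-distance (degenerate) configuration is heavily penalized, while solutions with well-spread distances are left essentially untouched.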

    Structured Low Rank Matrix Optimization Problems: A Penalty Approach

    Ph.D. (Doctor of Philosophy)

    The LIBOR Market Model

    Student Number: 0003819T. MSc dissertation, School of Computational and Applied Mathematics, Faculty of Science.

    The over-the-counter (OTC) interest rate derivative market is large and rapidly developing. In March 2005, the Bank for International Settlements published its “Triennial Central Bank Survey”, which examined derivative market activity in 2004 (http://www.bis.org/publ/rpfx05.htm). The reported total gross market value of OTC derivatives stood at $6.4 trillion at the end of June 2004. The gross market value of interest rate derivatives comprised a massive 71.7% of the total, followed by foreign exchange derivatives (17.5%) and equity derivatives (5%). Further, the daily turnover in interest rate option trading increased from 5.9% (of the total daily turnover in the interest rate derivative market) in April 2001 to 16.7% in April 2004. This growth and success of the interest rate derivative market have resulted in the introduction of exotic interest rate products and the ongoing search for accurate and efficient pricing and hedging techniques for them.

    Interest rate caps and (European) swaptions form the largest and most liquid part of the interest rate option market. These vanilla instruments depend only on the level of the yield curve. The market standard for pricing them is the Black (1976) model. Caps and swaptions are typically used by traders of interest rate derivatives to gamma and vega hedge complex products. Thus an important feature of an interest rate model is not only its ability to recover an arbitrary input yield curve, but also its ability to calibrate to the implied at-the-money cap and swaption volatilities. The LIBOR market model developed out of the market’s need to price and hedge exotic interest rate derivatives consistently with the Black (1976) caplet formula. The focus of this dissertation is this popular class of interest rate models.

    The fundamental traded assets in an interest rate model are zero-coupon bonds. The evolution of their values, assuming that the underlying movements are continuous, is driven by a finite number of Brownian motions. The traditional approach to modelling the term structure of interest rates is to postulate the evolution of the instantaneous short or forward rates. In contrast, the LIBOR market model models the discrete forward rates directly. The additional assumption imposed is that the volatility function of the discrete forward rates is a deterministic function of time.

    In Chapter 2 we provide a brief overview of the history of interest rate modelling that led to the LIBOR market model. The general theory of derivative pricing is presented, followed by an exposition and derivation of the stochastic differential equations governing the forward LIBOR rates. The LIBOR market model framework only truly becomes a model once the volatility functions of the discrete forward rates are specified. The information provided by the yield curve, the cap and the swaption markets does not imply a unique form for these functions. In Chapter 3, we examine various specifications of the LIBOR market model. Once the model is specified, it is calibrated to the above-mentioned market data. An advantage of the LIBOR market model is its ability to calibrate to a large set of liquid market instruments while generating a realistic evolution of the forward rate volatility structure (Piterbarg 2004). We examine some of the practical problems that arise when calibrating the market model and present an example calibration in the UK market.

    The necessity, in general, of pricing derivatives in the LIBOR market model using Monte Carlo simulation is explained in Chapter 4. Both the Monte Carlo and quasi-Monte Carlo simulation approaches are presented, together with an examination of the various discretizations of the forward rate stochastic differential equations. The chapter concludes with numerical results comparing the performance of Monte Carlo estimates with quasi-Monte Carlo estimates, and the performance of the discretization approaches.

    In the final chapter we discuss numerical techniques based on Monte Carlo simulation for pricing American derivatives. We present the primal and dual American option pricing problem formulations, followed by an overview of the two main numerical techniques for pricing American options using Monte Carlo simulation. Callable LIBOR exotics is a name given to a class of interest rate derivatives that have early exercise provisions (Bermudan style) to exercise into various underlying interest rate products. A popular approach for valuing these instruments in the LIBOR market model is to estimate the continuation value of the option using parametric regression and, subsequently, to estimate the option value using backward induction. This approach relies on the choice of relevant, i.e. problem-specific, predictor variables and also on the functional form of the regression function. It is certainly not a “black-box” type of approach. Instead of choosing the relevant predictor variables, we present the sliced inverse regression technique. Sliced inverse regression is a statistical technique that aims to capture the main features of the data with a few low-dimensional projections. In particular, we use sliced inverse regression to identify the low-dimensional projections of the forward LIBOR rates, and then we estimate the continuation value of the option using nonparametric regression techniques. The results for a Bermudan swaption in a two-factor LIBOR market model are compared to those in Andersen (2000).
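
    For orientation, the dynamics the dissertation builds on can be stated compactly: each discrete forward rate F_k(t), accruing over [T_k, T_{k+1}] with day-count fraction δ_k, is lognormal under its own forward measure Q^{T_{k+1}},

        dF_k(t) \;=\; \sigma_k(t)\, F_k(t)\, dW_k^{k+1}(t),

    which is precisely what makes the model consistent with the Black (1976) caplet formula. Under a single pricing measure the other forward rates pick up state-dependent drifts, which is why joint paths are simulated numerically, typically by an Euler scheme on log F_k. A minimal single-rate Monte Carlo caplet pricer in this spirit (flat volatility, illustrative parameter values; under the rate's own forward measure the drift of log F is just -σ²/2):

        import numpy as np

        def mc_caplet(F0, K, sigma, T, delta, P0, n_paths=100_000, n_steps=50, seed=0):
            """Caplet price by Euler-stepping log F under the forward measure
            of the payment date; P0 discounts the payoff paid at T + delta."""
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            logF = np.full(n_paths, np.log(F0))
            for _ in range(n_steps):
                z = rng.standard_normal(n_paths)
                logF += -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z
            payoff = delta * np.maximum(np.exp(logF) - K, 0.0)
            return P0 * payoff.mean()

        # sanity check: should sit close to the Black (1976) caplet value
        print(mc_caplet(F0=0.05, K=0.05, sigma=0.20, T=1.0, delta=0.25, P0=0.95))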

    Electron Beam X-Ray Computed Tomography for Multiphase Flows and An Experimental Study of Inter-channel Mixing

    This thesis consists of two parts. In the first, a high-speed X-ray Computed Tomography (CT) system for multiphase flows is developed. X-ray CT has been employed in the study of multiphase flows, but the systems developed to date often have excellent spatial resolution at the expense of poor temporal resolution; hence, X-ray CT has mostly been used to examine time-averaged phase distributions. In the present work, we report on the development of a Scanning Electron Beam X-ray Tomography (SEBXT) CT system that allows for much higher time resolution with acceptable spatial resolution. Such a system, however, can suffer from beam-hardening and limited-angle artifacts. In the present study, we developed a high-speed, limited-angle SEBXT system along with a new CT reconstruction algorithm designed to enhance the reconstruction results of such a system. To test the performance of the CT system, we produced example reconstruction results for two test phantoms based on both actually measured and simulated sinograms.

    The second part examines the process by which fluid mixes between two parallel flow channels through a narrow gap. This flow is a canonical representation of the mixing and mass transfer processes that often occur in thermo-hydraulic systems. The mixing can be strongly related to the presence of large-scale periodic flow structures that form within the gap. In the present work, we have developed an experimental setup to examine single-phase mixing through the narrow rectangular gaps connecting two rectangular channels. Our goal is to elucidate the underlying flow processes responsible for inter-channel mixing, and to produce high-fidelity data for validation of computational models. Dye concentration measurements were used to determine the time-averaged rate of mixing. Particle Image Velocimetry (PIV) was used to measure the flow fields within the gap. A Proper Orthogonal Decomposition (POD) of the PIV flow fields revealed the presence of coherent flow structures. The decomposed flow fields were then used to predict the time-averaged mixing, which closely matched the experimentally measured values.

    Ph.D. Naval Architecture & Marine Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies.
    https://deepblue.lib.umich.edu/bitstream/2027.42/138666/1/seongjin_2.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/138666/2/seongjin_1.pdf
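
    The POD mentioned above is commonly computed from an SVD of mean-subtracted snapshots; a minimal sketch in Python (array shapes and names are illustrative, not taken from the thesis):

        import numpy as np

        def pod(snapshots, n_modes):
            """snapshots: (n_points, n_times) array, one flattened PIV velocity
            field per column. Returns the leading spatial modes, their singular
            values, and the corresponding temporal coefficients."""
            mean_flow = snapshots.mean(axis=1, keepdims=True)
            X = snapshots - mean_flow                  # fluctuating part only
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            modes = U[:, :n_modes]                     # spatial POD modes
            coeffs = s[:n_modes, None] * Vt[:n_modes]  # temporal coefficients
            return modes, s[:n_modes], coeffs

        # usage with synthetic data: 1000 grid points, 200 snapshots
        X = np.random.default_rng(0).standard_normal((1000, 200))
        modes, sv, a = pod(X, n_modes=5)

    Each retained mode captures a fraction s_k^2 / sum_j s_j^2 of the fluctuating energy, which is how the coherent flow structures mentioned in the abstract are typically identified.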

    An overview of sparse convex optimization

    Sparse estimation methods aim at using or obtaining parsimonious representations of data or models. Optimization seeks values of a variable that lead to an optimal value of the function to be optimized. Suppose we have a system of equations with more unknowns than equations. Such a system has infinitely many solutions. If one has prior knowledge that the solution is sparse, the problem can be treated as an optimization problem. In this mini-dissertation we discuss convex algorithms for finding sparse solutions; convex algorithms are chosen because they are relatively easy to implement. The classes of methods we discuss are convex relaxation, greedy algorithms and iterative thresholding. We then compare these algorithms by applying them to a Sudoku problem. Dissertation (MSc), University of Pretoria, 2018. CAIR and STATOMET. Statistics. MSc. Unrestricted.
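
    Of the three families listed, iterative thresholding is the quickest to sketch: ISTA for the lasso problem min_x 0.5*||Ax - b||^2 + λ||x||_1 alternates a gradient step on the smooth term with soft-thresholding. A minimal Python version (step size from the spectral norm of A; all names illustrative):

        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def ista(A, b, lam, n_iter=500):
            """Iterative soft-thresholding for 0.5*||Ax - b||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)       # gradient of the smooth term
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # usage: recover a 5-sparse vector from 50 equations in 200 unknowns
        rng = np.random.default_rng(0)
        A = rng.standard_normal((50, 200))
        x_true = np.zeros(200)
        x_true[rng.choice(200, size=5, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true, lam=0.1)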