
    Visualizing probabilistic models: Intensive Principal Component Analysis

    Unsupervised learning makes manifest the underlying structure of data without curated training and specific problem definitions. However, the inference of relationships between data points is frustrated by the "curse of dimensionality" in high dimensions. Inspired by replica theory from statistical mechanics, we consider replicas of the system to tune the dimensionality and take the limit as the number of replicas goes to zero. The result is the intensive embedding, which is not only isometric (preserving local distances) but allows global structure to be more transparently visualized. We develop the Intensive Principal Component Analysis (InPCA) and demonstrate clear improvements in visualizations of the Ising model of magnetic spins, a neural network, and the dark energy cold dark matter (ΛCDM) model as applied to the Cosmic Microwave Background.
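    As a rough, self-contained illustration of the kind of embedding this abstract describes, the sketch below computes pairwise Bhattacharyya-based distances between probability distributions, double-centers them in the classical MDS fashion, and keeps the components with the largest-magnitude (possibly negative) eigenvalues. The `intensive_pca` name, the exact distance formula, and the toy Bernoulli model family are illustrative assumptions, not the paper's precise construction.

```python
import numpy as np

def intensive_pca(models, n_components=2, eps=1e-12):
    """Minimal InPCA-style embedding sketch (illustrative, not the paper's exact recipe).

    models: array (n_models, n_outcomes), each row a discrete probability
    distribution. Uses a Bhattacharyya-type distance and MDS-style double
    centering; eigenvalues may be negative, so coordinates are scaled by
    sqrt(|eigenvalue|).
    """
    p = np.asarray(models, dtype=float)
    # Pairwise Bhattacharyya coefficients and a log-based distance analogue
    bc = np.sqrt(p) @ np.sqrt(p).T            # BC_ij = sum_k sqrt(p_ik * p_jk)
    d2 = -np.log(np.clip(bc, eps, None))      # squared-distance analogue
    # Double centering (classical multidimensional scaling)
    n = d2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    W = -0.5 * J @ d2 @ J
    # Eigendecomposition; keep the components with the largest |eigenvalue|
    vals, vecs = np.linalg.eigh(W)
    order = np.argsort(-np.abs(vals))[:n_components]
    coords = vecs[:, order] * np.sqrt(np.abs(vals[order]))
    return coords, vals[order]

if __name__ == "__main__":
    # Toy family: Bernoulli(theta) distributions along a 1-D parameter sweep
    thetas = np.linspace(0.05, 0.95, 20)
    models = np.stack([thetas, 1.0 - thetas], axis=1)
    coords, vals = intensive_pca(models)
    print(coords.shape, vals)
```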

    Augmented Slepians: Bandlimited Functions that Counterbalance Energy in Selected Intervals

    Slepian functions provide a solution to the optimization problem of joint time-frequency localization. Here, this concept is extended by using a generalized optimization criterion that favors energy concentration in one interval while penalizing energy in another interval, leading to the "augmented" Slepian functions. Mathematical foundations together with examples are presented to illustrate the most interesting properties that these generalized Slepian functions exhibit. The relevance of this novel energy-concentration criterion is also discussed, along with some of its applications.
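    To make the concentration criterion concrete, here is a minimal discrete sketch: bandlimited vectors that maximize the energy in one index interval minus a penalty times the energy in another, obtained from a symmetric eigenproblem. The discretization, the B(D_A − μ·D_B)B form of the operator, and the parameter values are assumptions for illustration, not the construction used in the paper.

```python
import numpy as np

def augmented_slepian_sketch(n=256, band=0.1, interval_a=(40, 90),
                             interval_b=(150, 200), mu=0.5, n_vectors=4):
    """Discrete sketch of an 'augmented' concentration criterion.

    Maximizes (energy in interval A) - mu * (energy in interval B) over
    bandlimited vectors by solving an eigenproblem for B (D_A - mu D_B) B,
    where B is the ideal low-pass (sinc) projection and D_A, D_B are
    interval selection masks.
    """
    k = np.arange(n)
    diff = k[:, None] - k[None, :]
    # Ideal bandlimiting kernel: sin(2*pi*band*(i-j)) / (pi*(i-j)), 2*band on the diagonal
    B = np.where(diff == 0, 2.0 * band,
                 np.sin(2.0 * np.pi * band * diff) / (np.pi * np.where(diff == 0, 1, diff)))
    # Interval selection operators
    d_a = np.zeros(n); d_a[interval_a[0]:interval_a[1]] = 1.0
    d_b = np.zeros(n); d_b[interval_b[0]:interval_b[1]] = 1.0
    M = B @ np.diag(d_a - mu * d_b) @ B
    M = 0.5 * (M + M.T)                      # symmetrize against round-off
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(-vals)[:n_vectors]    # largest criterion values first
    return vals[order], vecs[:, order]

if __name__ == "__main__":
    vals, vecs = augmented_slepian_sketch()
    print("top criterion values:", np.round(vals, 4))
```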

    An Exploratory Exercise in Taguchi Analysis of Design Parameters: Application to a Shuttle-to-space Station Automated Approach Control System

    The chief goals of the summer project have been twofold: first, for my host group and myself to learn as much of the working details of Taguchi analysis as possible in the time allotted, and, secondly, to apply the methodology to a design problem with the intention of establishing a preliminary set of near-optimal (in the sense of producing a desired response) design parameter values from among a large number of candidate factor combinations. The selected problem is concerned with determining design factor settings for an automated approach program which is to have the capability of guiding the Shuttle into the docking port of the Space Station under controlled conditions so as to meet and/or optimize certain target criteria. The candidate design parameters under study were glide path (i.e., approach) angle, path intercept and approach gains, and minimum impulse bit mode (a parameter which defines how Shuttle jets shall be fired). Several performance criteria were of concern: terminal relative velocity at the instant the two spacecraft are mated; docking offset; number of Shuttle jet firings in certain specified directions (of interest due to possible plume impingement on the Station's solar arrays); and total RCS (a measure of the energy expended in performing the approach/docking maneuver). In the material discussed here, we have focused on a single performance criterion: total RCS. An analysis of the possibility of employing a multiobjective function composed of a weighted sum of the various individual criteria has been undertaken but is, at this writing, incomplete. Results from the Taguchi statistical analysis indicate that only three of the original four posited factors are significant in affecting the RCS response. A comparison of model simulation output (via Monte Carlo) with predictions based on estimated factor effects inferred from the Taguchi experiment array data suggested acceptable or close agreement between the two except at the predicted optimum point, where a difference outside a rule-of-thumb bound was observed. We have concluded that there is most likely an interaction effect not provided for in the original orthogonal array selected as the basis for our experimental design. However, we feel that the data indicate that this interaction is a mild one and that inclusion of its effect will not alter the location of the optimum.
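    As a rough companion to the factor-effects analysis the abstract describes, the sketch below computes main effects from a small orthogonal-array experiment and forms an additive prediction at the best factor-level combination. The L8-style array, the synthetic responses, and the factor labels are hypothetical placeholders, not the actual Shuttle/Station simulation data or the specific design used in the study.

```python
import numpy as np

# Hypothetical 8-run, 4-factor two-level orthogonal array (coded 0/1 per run).
# Columns stand in for: glide-path angle, path-intercept gain,
# approach gain, minimum-impulse-bit mode.
L8 = np.array([
    [0, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 1],
])

# Synthetic total-RCS responses for the eight runs (illustrative only).
rcs = np.array([42.0, 39.5, 47.2, 44.8, 36.1, 34.9, 41.0, 40.3])

def main_effects(array, response):
    """Average response at each level of each factor ("smaller is better")."""
    effects = {}
    for j in range(array.shape[1]):
        lvl0 = response[array[:, j] == 0].mean()
        lvl1 = response[array[:, j] == 1].mean()
        effects[f"factor_{j}"] = (lvl0, lvl1)
    return effects

effects = main_effects(L8, rcs)
grand_mean = rcs.mean()
best_levels = {f: int(np.argmin(levels)) for f, levels in effects.items()}
# Additive (no-interaction) prediction at the chosen optimum
prediction = grand_mean + sum(effects[f][lvl] - grand_mean
                              for f, lvl in best_levels.items())
for f, (lvl0_mean, lvl1_mean) in effects.items():
    print(f"{f}: level0={lvl0_mean:.2f}  level1={lvl1_mean:.2f}  "
          f"delta={lvl1_mean - lvl0_mean:+.2f}")
print("best levels:", best_levels, " predicted RCS:", round(prediction, 2))
```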

    Comparing algorithms and criteria for designing Bayesian conjoint choice experiments.

    The recent algorithm to find efficient conjoint choice designs, the RSC-algorithm developed by Sándor and Wedel (2001), uses Bayesian design methods that integrate the D-optimality criterion over a prior distribution of likely parameter values. Characteristic for this algorithm is that the designs satisfy the minimal level overlap property provided the starting design complies with it. Another algorithm, more firmly embedded in the literature and developed by Zwerina et al. (1996), involves an adaptation of the modified Fedorov exchange algorithm to the multinomial logit choice model. However, it does not take into account the uncertainty about the assumed parameter values. In this paper, we adjust the modified Fedorov choice algorithm in a Bayesian fashion and compare its designs to those produced by the RSC-algorithm. Additionally, we introduce a measure to investigate the utility balances of the designs. Besides the widely used D-optimality criterion, we also implement the A-, G- and V-optimality criteria and look for the criterion that is most suitable for prediction purposes and that offers the best quality in terms of computational effectiveness. The comparison study reveals that the Bayesian modified Fedorov choice algorithm provides more efficient designs than the RSC-algorithm and that the D- and V-optimality criteria are the best criteria for prediction, but the computation time with the V-optimality criterion is longer.
    Keywords: A-optimality; algorithms; Bayesian design; Bayesian modified Fedorov choice algorithm; choice; conjoint choice experiments; criteria; D-optimality; design; discrete choice experiments; distribution; effectiveness; fashion; G-optimality; logit; methods; model; multinomial logit; predictive validity; quality; research; RSC-algorithm; studies; time; uncertainty; V-optimality; value
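    To make the Bayesian design criterion concrete, the sketch below evaluates the Bayesian D-error of a multinomial logit choice design by averaging the D-error over draws from a normal prior on the part-worths; an exchange algorithm such as the modified Fedorov approach would repeatedly swap candidate profiles and keep swaps that lower this number. The toy design matrix, the prior, and the draw count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mnl_information(design, beta):
    """Fisher information of the multinomial logit model for a choice design.

    design: array (n_sets, n_alternatives, n_parameters) of effects-coded
    attribute levels. Returns sum_s X_s' (diag(p_s) - p_s p_s') X_s.
    """
    info = np.zeros((design.shape[2], design.shape[2]))
    for X in design:
        u = X @ beta
        p = np.exp(u - u.max())
        p /= p.sum()
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return info

def bayesian_d_error(design, prior_mean, prior_cov, n_draws=500, seed=0):
    """Monte Carlo average of det(I(beta))^(-1/k) over the prior (lower is better)."""
    rng = np.random.default_rng(seed)
    k = design.shape[2]
    draws = rng.multivariate_normal(prior_mean, prior_cov, size=n_draws)
    errors = []
    for beta in draws:
        sign, logdet = np.linalg.slogdet(mnl_information(design, beta))
        errors.append(np.inf if sign <= 0 else np.exp(-logdet / k))
    return float(np.mean(errors))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy design: 8 choice sets, 3 alternatives, 4 effects-coded parameters
    design = rng.choice([-1.0, 0.0, 1.0], size=(8, 3, 4))
    print("Bayesian D-error:",
          bayesian_d_error(design, prior_mean=np.zeros(4), prior_cov=0.5 * np.eye(4)))
```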

    Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach

    Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting. Fixed penalty methods, though common, lack adaptability and suffer from hyperparameter sensitivity. In this paper, we propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem, where the data fidelity term is the minimization objective and the regularization terms serve as constraints. We then employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism. Our approach extends beyond black-box regularization, demonstrating significant improvements in white-box models, where weights are often subject to hard constraints to ensure interpretability. Experimental results on image-based classification on the MNIST, CIFAR10, and CIFAR100 datasets validate the effectiveness of our approach. SAL consistently achieves higher accuracy while also achieving better constraint satisfaction, showcasing its potential for optimizing DNNs under constrained settings.
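    To illustrate the general shape of such a constrained training loop, here is a minimal PyTorch sketch that treats an L1 weight-norm budget as an inequality constraint and updates a Lagrange multiplier with a classical augmented-Lagrangian step between epochs. The toy model and data, the constraint choice, and the update schedule are assumptions for illustration; the paper's SAL method should be consulted for the actual algorithm.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy data and model (stand-ins; the paper evaluates MNIST/CIFAR classifiers)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

budget = 25.0          # constraint: total |weights| <= budget (illustrative)
lam, rho = 0.0, 0.1    # Lagrange multiplier and penalty weight

def constraint(m):
    """c(w) = total L1 norm of the parameters minus the allowed budget."""
    return sum(p.abs().sum() for p in m.parameters()) - budget

for epoch in range(50):
    perm = torch.randperm(X.shape[0])
    for i in range(0, X.shape[0], 64):
        idx = perm[i:i + 64]
        opt.zero_grad()
        c = torch.clamp(constraint(model), min=0.0)   # inequality violation
        # Augmented Lagrangian: data loss + lam * c + (rho / 2) * c^2
        loss = loss_fn(model(X[idx]), y[idx]) + lam * c + 0.5 * rho * c ** 2
        loss.backward()
        opt.step()
    with torch.no_grad():
        c_val = float(torch.clamp(constraint(model), min=0.0))
    lam = lam + rho * c_val          # multiplier ascent step after each epoch
    rho = min(rho * 1.05, 10.0)      # optionally grow the penalty weight

with torch.no_grad():
    acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"train accuracy={acc:.3f}  constraint violation={c_val:.3f}  lambda={lam:.3f}")
```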